00:00:00.001 Started by upstream project "autotest-nightly" build number 3889 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3269 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.079 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.083 The recommended git tool is: git 00:00:00.083 using credential 00000000-0000-0000-0000-000000000002 00:00:00.107 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.119 Fetching changes from the remote Git repository 00:00:00.120 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.135 Using shallow fetch with depth 1 00:00:00.135 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.135 > git --version # timeout=10 00:00:00.153 > git --version # 'git version 2.39.2' 00:00:00.153 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.176 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.176 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.818 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.829 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.841 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:03.841 > git config core.sparsecheckout # timeout=10 00:00:03.851 > git read-tree -mu HEAD # timeout=10 00:00:03.868 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:03.885 Commit message: "inventory: add WCP3 to free inventory" 00:00:03.885 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.969 [Pipeline] Start of Pipeline 00:00:03.985 [Pipeline] library 00:00:03.986 Loading library shm_lib@master 00:00:03.986 Library shm_lib@master is cached. Copying from home. 00:00:03.999 [Pipeline] node 00:00:04.008 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:04.009 [Pipeline] { 00:00:04.017 [Pipeline] catchError 00:00:04.018 [Pipeline] { 00:00:04.031 [Pipeline] wrap 00:00:04.041 [Pipeline] { 00:00:04.050 [Pipeline] stage 00:00:04.051 [Pipeline] { (Prologue) 00:00:04.067 [Pipeline] echo 00:00:04.068 Node: VM-host-SM9 00:00:04.073 [Pipeline] cleanWs 00:00:04.082 [WS-CLEANUP] Deleting project workspace... 00:00:04.082 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.088 [WS-CLEANUP] done 00:00:04.250 [Pipeline] setCustomBuildProperty 00:00:04.350 [Pipeline] httpRequest 00:00:04.369 [Pipeline] echo 00:00:04.371 Sorcerer 10.211.164.101 is alive 00:00:04.383 [Pipeline] httpRequest 00:00:04.386 HttpMethod: GET 00:00:04.387 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.387 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.388 Response Code: HTTP/1.1 200 OK 00:00:04.388 Success: Status code 200 is in the accepted range: 200,404 00:00:04.389 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.974 [Pipeline] sh 00:00:05.254 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.267 [Pipeline] httpRequest 00:00:05.287 [Pipeline] echo 00:00:05.288 Sorcerer 10.211.164.101 is alive 00:00:05.295 [Pipeline] httpRequest 00:00:05.298 HttpMethod: GET 00:00:05.298 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:05.299 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:05.307 Response Code: HTTP/1.1 200 OK 00:00:05.307 Success: Status code 200 is in the accepted range: 200,404 00:00:05.308 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:14.046 [Pipeline] sh 00:01:14.334 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:16.880 [Pipeline] sh 00:01:17.160 + git -C spdk log --oneline -n5 00:01:17.160 719d03c6a sock/uring: only register net impl if supported 00:01:17.160 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:17.160 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:17.160 6c7c1f57e accel: add sequence outstanding stat 00:01:17.160 3bc8e6a26 accel: add utility to put task 00:01:17.180 [Pipeline] writeFile 00:01:17.197 [Pipeline] sh 00:01:17.478 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:17.490 [Pipeline] sh 00:01:17.771 + cat autorun-spdk.conf 00:01:17.771 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.771 SPDK_TEST_NVMF=1 00:01:17.771 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:17.771 SPDK_TEST_URING=1 00:01:17.771 SPDK_TEST_VFIOUSER=1 00:01:17.771 SPDK_TEST_USDT=1 00:01:17.771 SPDK_RUN_ASAN=1 00:01:17.771 SPDK_RUN_UBSAN=1 00:01:17.771 NET_TYPE=virt 00:01:17.771 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.778 RUN_NIGHTLY=1 00:01:17.781 [Pipeline] } 00:01:17.799 [Pipeline] // stage 00:01:17.824 [Pipeline] stage 00:01:17.827 [Pipeline] { (Run VM) 00:01:17.841 [Pipeline] sh 00:01:18.122 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:18.122 + echo 'Start stage prepare_nvme.sh' 00:01:18.122 Start stage prepare_nvme.sh 00:01:18.122 + [[ -n 5 ]] 00:01:18.122 + disk_prefix=ex5 00:01:18.122 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:18.122 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:18.122 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:18.122 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.122 ++ SPDK_TEST_NVMF=1 00:01:18.122 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.122 ++ SPDK_TEST_URING=1 00:01:18.122 ++ SPDK_TEST_VFIOUSER=1 00:01:18.122 ++ SPDK_TEST_USDT=1 00:01:18.122 ++ SPDK_RUN_ASAN=1 00:01:18.122 ++ SPDK_RUN_UBSAN=1 00:01:18.122 ++ NET_TYPE=virt 
00:01:18.122 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.122 ++ RUN_NIGHTLY=1 00:01:18.122 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:18.122 + nvme_files=() 00:01:18.122 + declare -A nvme_files 00:01:18.122 + backend_dir=/var/lib/libvirt/images/backends 00:01:18.122 + nvme_files['nvme.img']=5G 00:01:18.122 + nvme_files['nvme-cmb.img']=5G 00:01:18.122 + nvme_files['nvme-multi0.img']=4G 00:01:18.122 + nvme_files['nvme-multi1.img']=4G 00:01:18.122 + nvme_files['nvme-multi2.img']=4G 00:01:18.122 + nvme_files['nvme-openstack.img']=8G 00:01:18.122 + nvme_files['nvme-zns.img']=5G 00:01:18.122 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:18.122 + (( SPDK_TEST_FTL == 1 )) 00:01:18.122 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:18.122 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:18.122 + for nvme in "${!nvme_files[@]}" 00:01:18.122 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:18.122 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.122 + for nvme in "${!nvme_files[@]}" 00:01:18.122 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:18.122 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.122 + for nvme in "${!nvme_files[@]}" 00:01:18.122 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:18.122 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:18.122 + for nvme in "${!nvme_files[@]}" 00:01:18.122 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:18.122 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.122 + for nvme in "${!nvme_files[@]}" 00:01:18.122 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:18.122 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.381 + for nvme in "${!nvme_files[@]}" 00:01:18.382 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:18.382 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.382 + for nvme in "${!nvme_files[@]}" 00:01:18.382 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:18.382 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.382 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:18.382 + echo 'End stage prepare_nvme.sh' 00:01:18.382 End stage prepare_nvme.sh 00:01:18.393 [Pipeline] sh 00:01:18.674 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:18.675 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38 
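The prepare_nvme.sh stage above drives everything from a bash associative array mapping backing-image names to sizes, then loops over it calling spdk/scripts/vagrant/create_nvme_img.sh for each entry. A minimal sketch of that pattern follows; the qemu-img call and exact paths are illustrative assumptions, since the script's internals are not shown in this log (the "fmt=raw ... preallocation=falloc" lines above are the formatting output it produces).

#!/usr/bin/env bash
# Sketch: pre-create raw NVMe backing images the way prepare_nvme.sh does above.
# Assumption: qemu-img stands in for create_nvme_img.sh, which is not shown here.
set -euo pipefail

backend_dir=/var/lib/libvirt/images/backends
disk_prefix=ex5

declare -A nvme_files=(
  [nvme.img]=5G
  [nvme-multi0.img]=4G
  [nvme-multi1.img]=4G
  [nvme-multi2.img]=4G
)

mkdir -p "$backend_dir"
for name in "${!nvme_files[@]}"; do
  img="$backend_dir/${disk_prefix}-${name}"
  # Raw, preallocated files match the "fmt=raw ... preallocation=falloc" output above.
  qemu-img create -f raw -o preallocation=falloc "$img" "${nvme_files[$name]}"
done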
00:01:18.934 00:01:18.934 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:18.934 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:18.934 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:18.934 HELP=0 00:01:18.934 DRY_RUN=0 00:01:18.934 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:18.934 NVME_DISKS_TYPE=nvme,nvme, 00:01:18.934 NVME_AUTO_CREATE=0 00:01:18.934 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:18.934 NVME_CMB=,, 00:01:18.934 NVME_PMR=,, 00:01:18.934 NVME_ZNS=,, 00:01:18.934 NVME_MS=,, 00:01:18.934 NVME_FDP=,, 00:01:18.934 SPDK_VAGRANT_DISTRO=fedora38 00:01:18.934 SPDK_VAGRANT_VMCPU=10 00:01:18.934 SPDK_VAGRANT_VMRAM=12288 00:01:18.934 SPDK_VAGRANT_PROVIDER=libvirt 00:01:18.934 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:18.934 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:18.934 SPDK_OPENSTACK_NETWORK=0 00:01:18.934 VAGRANT_PACKAGE_BOX=0 00:01:18.934 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:18.934 FORCE_DISTRO=true 00:01:18.934 VAGRANT_BOX_VERSION= 00:01:18.934 EXTRA_VAGRANTFILES= 00:01:18.934 NIC_MODEL=e1000 00:01:18.934 00:01:18.934 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:18.934 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:22.222 Bringing machine 'default' up with 'libvirt' provider... 00:01:22.481 ==> default: Creating image (snapshot of base box volume). 00:01:22.481 ==> default: Creating domain with the following settings... 
00:01:22.481 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720990833_71e8d7e578a1332b85f0 00:01:22.481 ==> default: -- Domain type: kvm 00:01:22.481 ==> default: -- Cpus: 10 00:01:22.481 ==> default: -- Feature: acpi 00:01:22.481 ==> default: -- Feature: apic 00:01:22.481 ==> default: -- Feature: pae 00:01:22.481 ==> default: -- Memory: 12288M 00:01:22.481 ==> default: -- Memory Backing: hugepages: 00:01:22.481 ==> default: -- Management MAC: 00:01:22.481 ==> default: -- Loader: 00:01:22.481 ==> default: -- Nvram: 00:01:22.481 ==> default: -- Base box: spdk/fedora38 00:01:22.481 ==> default: -- Storage pool: default 00:01:22.481 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720990833_71e8d7e578a1332b85f0.img (20G) 00:01:22.481 ==> default: -- Volume Cache: default 00:01:22.481 ==> default: -- Kernel: 00:01:22.481 ==> default: -- Initrd: 00:01:22.481 ==> default: -- Graphics Type: vnc 00:01:22.481 ==> default: -- Graphics Port: -1 00:01:22.481 ==> default: -- Graphics IP: 127.0.0.1 00:01:22.481 ==> default: -- Graphics Password: Not defined 00:01:22.481 ==> default: -- Video Type: cirrus 00:01:22.481 ==> default: -- Video VRAM: 9216 00:01:22.481 ==> default: -- Sound Type: 00:01:22.481 ==> default: -- Keymap: en-us 00:01:22.481 ==> default: -- TPM Path: 00:01:22.481 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:22.481 ==> default: -- Command line args: 00:01:22.481 ==> default: -> value=-device, 00:01:22.481 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:22.481 ==> default: -> value=-drive, 00:01:22.481 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:22.481 ==> default: -> value=-device, 00:01:22.481 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.481 ==> default: -> value=-device, 00:01:22.481 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:22.481 ==> default: -> value=-drive, 00:01:22.481 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:22.481 ==> default: -> value=-device, 00:01:22.481 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.481 ==> default: -> value=-drive, 00:01:22.481 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:22.481 ==> default: -> value=-device, 00:01:22.481 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.481 ==> default: -> value=-drive, 00:01:22.481 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:22.481 ==> default: -> value=-device, 00:01:22.481 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.481 ==> default: Creating shared folders metadata... 00:01:22.481 ==> default: Starting domain. 00:01:23.859 ==> default: Waiting for domain to get an IP address... 00:01:41.944 ==> default: Waiting for SSH to become available... 00:01:41.944 ==> default: Configuring and enabling network interfaces... 
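The domain settings above end with the raw QEMU arguments vagrant-libvirt appends for the emulated NVMe devices: one single-namespace controller (serial 12340) backed by ex5-nvme.img and one three-namespace controller (serial 12341) backed by the ex5-nvme-multi*.img files. A trimmed, hand-written equivalent is sketched below for reference; machine type, memory, hugepage backing and the boot disk are omitted, so this is not the full libvirt-generated command line.

# Sketch only: the NVMe controller/namespace topology from the settings above,
# with the rest of the VM definition left out.
qemu-system-x86_64 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3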
00:01:44.471 default: SSH address: 192.168.121.210:22 00:01:44.471 default: SSH username: vagrant 00:01:44.471 default: SSH auth method: private key 00:01:46.999 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:55.107 ==> default: Mounting SSHFS shared folder... 00:01:55.676 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:55.676 ==> default: Checking Mount.. 00:01:57.055 ==> default: Folder Successfully Mounted! 00:01:57.055 ==> default: Running provisioner: file... 00:01:57.992 default: ~/.gitconfig => .gitconfig 00:01:58.252 00:01:58.252 SUCCESS! 00:01:58.252 00:01:58.252 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:58.252 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:58.252 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:58.252 00:01:58.264 [Pipeline] } 00:01:58.301 [Pipeline] // stage 00:01:58.341 [Pipeline] dir 00:01:58.344 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:58.360 [Pipeline] { 00:01:58.386 [Pipeline] catchError 00:01:58.388 [Pipeline] { 00:01:58.405 [Pipeline] sh 00:01:58.688 + vagrant ssh-config --host vagrant 00:01:58.688 + sed -ne /^Host/,$p 00:01:58.688 + tee ssh_conf 00:02:02.006 Host vagrant 00:02:02.006 HostName 192.168.121.210 00:02:02.006 User vagrant 00:02:02.006 Port 22 00:02:02.006 UserKnownHostsFile /dev/null 00:02:02.006 StrictHostKeyChecking no 00:02:02.006 PasswordAuthentication no 00:02:02.006 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:02.006 IdentitiesOnly yes 00:02:02.006 LogLevel FATAL 00:02:02.006 ForwardAgent yes 00:02:02.006 ForwardX11 yes 00:02:02.006 00:02:02.018 [Pipeline] withEnv 00:02:02.019 [Pipeline] { 00:02:02.031 [Pipeline] sh 00:02:02.309 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:02.309 source /etc/os-release 00:02:02.309 [[ -e /image.version ]] && img=$(< /image.version) 00:02:02.309 # Minimal, systemd-like check. 00:02:02.309 if [[ -e /.dockerenv ]]; then 00:02:02.309 # Clear garbage from the node's name: 00:02:02.309 # agt-er_autotest_547-896 -> autotest_547-896 00:02:02.309 # $HOSTNAME is the actual container id 00:02:02.309 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:02.309 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:02.309 # We can assume this is a mount from a host where container is running, 00:02:02.309 # so fetch its hostname to easily identify the target swarm worker. 
00:02:02.309 container="$(< /etc/hostname) ($agent)" 00:02:02.309 else 00:02:02.309 # Fallback 00:02:02.309 container=$agent 00:02:02.309 fi 00:02:02.309 fi 00:02:02.309 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:02.309 00:02:02.577 [Pipeline] } 00:02:02.595 [Pipeline] // withEnv 00:02:02.603 [Pipeline] setCustomBuildProperty 00:02:02.612 [Pipeline] stage 00:02:02.613 [Pipeline] { (Tests) 00:02:02.629 [Pipeline] sh 00:02:02.908 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:03.181 [Pipeline] sh 00:02:03.461 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:03.736 [Pipeline] timeout 00:02:03.736 Timeout set to expire in 30 min 00:02:03.739 [Pipeline] { 00:02:03.758 [Pipeline] sh 00:02:04.042 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:04.609 HEAD is now at 719d03c6a sock/uring: only register net impl if supported 00:02:04.622 [Pipeline] sh 00:02:04.902 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:05.176 [Pipeline] sh 00:02:05.456 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:05.731 [Pipeline] sh 00:02:06.011 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:06.270 ++ readlink -f spdk_repo 00:02:06.271 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:06.271 + [[ -n /home/vagrant/spdk_repo ]] 00:02:06.271 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:06.271 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:06.271 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:06.271 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:06.271 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:06.271 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:06.271 + cd /home/vagrant/spdk_repo 00:02:06.271 + source /etc/os-release 00:02:06.271 ++ NAME='Fedora Linux' 00:02:06.271 ++ VERSION='38 (Cloud Edition)' 00:02:06.271 ++ ID=fedora 00:02:06.271 ++ VERSION_ID=38 00:02:06.271 ++ VERSION_CODENAME= 00:02:06.271 ++ PLATFORM_ID=platform:f38 00:02:06.271 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:06.271 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:06.271 ++ LOGO=fedora-logo-icon 00:02:06.271 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:06.271 ++ HOME_URL=https://fedoraproject.org/ 00:02:06.271 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:06.271 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:06.271 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:06.271 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:06.271 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:06.271 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:06.271 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:06.271 ++ SUPPORT_END=2024-05-14 00:02:06.271 ++ VARIANT='Cloud Edition' 00:02:06.271 ++ VARIANT_ID=cloud 00:02:06.271 + uname -a 00:02:06.271 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:06.271 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:06.530 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:06.789 Hugepages 00:02:06.789 node hugesize free / total 00:02:06.789 node0 1048576kB 0 / 0 00:02:06.789 node0 2048kB 0 / 0 00:02:06.789 00:02:06.789 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:06.789 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:06.789 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:06.789 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:06.789 + rm -f /tmp/spdk-ld-path 00:02:06.789 + source autorun-spdk.conf 00:02:06.789 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.789 ++ SPDK_TEST_NVMF=1 00:02:06.789 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:06.789 ++ SPDK_TEST_URING=1 00:02:06.789 ++ SPDK_TEST_VFIOUSER=1 00:02:06.789 ++ SPDK_TEST_USDT=1 00:02:06.789 ++ SPDK_RUN_ASAN=1 00:02:06.789 ++ SPDK_RUN_UBSAN=1 00:02:06.789 ++ NET_TYPE=virt 00:02:06.789 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:06.789 ++ RUN_NIGHTLY=1 00:02:06.789 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:06.789 + [[ -n '' ]] 00:02:06.789 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:06.789 + for M in /var/spdk/build-*-manifest.txt 00:02:06.789 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:06.789 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:06.789 + for M in /var/spdk/build-*-manifest.txt 00:02:06.789 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:06.789 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:06.789 ++ uname 00:02:06.789 + [[ Linux == \L\i\n\u\x ]] 00:02:06.789 + sudo dmesg -T 00:02:06.789 + sudo dmesg --clear 00:02:06.789 + dmesg_pid=5167 00:02:06.790 + [[ Fedora Linux == FreeBSD ]] 00:02:06.790 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:06.790 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:06.790 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:06.790 + sudo dmesg -Tw 00:02:06.790 + [[ -x 
/usr/src/fio-static/fio ]] 00:02:06.790 + export FIO_BIN=/usr/src/fio-static/fio 00:02:06.790 + FIO_BIN=/usr/src/fio-static/fio 00:02:06.790 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:06.790 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:06.790 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:06.790 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:06.790 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:06.790 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:06.790 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:06.790 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:06.790 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:06.790 Test configuration: 00:02:06.790 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.790 SPDK_TEST_NVMF=1 00:02:06.790 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:06.790 SPDK_TEST_URING=1 00:02:06.790 SPDK_TEST_VFIOUSER=1 00:02:06.790 SPDK_TEST_USDT=1 00:02:06.790 SPDK_RUN_ASAN=1 00:02:06.790 SPDK_RUN_UBSAN=1 00:02:06.790 NET_TYPE=virt 00:02:06.790 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.048 RUN_NIGHTLY=1 21:01:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:07.048 21:01:18 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:07.049 21:01:18 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:07.049 21:01:18 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:07.049 21:01:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.049 21:01:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.049 21:01:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.049 21:01:18 -- paths/export.sh@5 -- $ export PATH 00:02:07.049 21:01:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.049 21:01:18 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:07.049 21:01:18 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:07.049 21:01:18 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720990878.XXXXXX 00:02:07.049 
21:01:18 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720990878.c0T9P8 00:02:07.049 21:01:18 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:07.049 21:01:18 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:07.049 21:01:18 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:07.049 21:01:18 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:07.049 21:01:18 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:07.049 21:01:18 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:07.049 21:01:18 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:07.049 21:01:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.049 21:01:18 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:02:07.049 21:01:18 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:07.049 21:01:18 -- pm/common@17 -- $ local monitor 00:02:07.049 21:01:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.049 21:01:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.049 21:01:18 -- pm/common@25 -- $ sleep 1 00:02:07.049 21:01:18 -- pm/common@21 -- $ date +%s 00:02:07.049 21:01:18 -- pm/common@21 -- $ date +%s 00:02:07.049 21:01:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720990878 00:02:07.049 21:01:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720990878 00:02:07.049 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720990878_collect-vmstat.pm.log 00:02:07.049 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720990878_collect-cpu-load.pm.log 00:02:07.985 21:01:19 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:07.985 21:01:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:07.985 21:01:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:07.985 21:01:19 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:07.985 21:01:19 -- spdk/autobuild.sh@16 -- $ date -u 00:02:07.985 Sun Jul 14 09:01:19 PM UTC 2024 00:02:07.985 21:01:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:07.985 v24.09-pre-202-g719d03c6a 00:02:07.985 21:01:19 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:07.985 21:01:19 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:07.985 21:01:19 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:07.985 21:01:19 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:07.985 21:01:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.985 ************************************ 00:02:07.985 START TEST asan 00:02:07.985 ************************************ 00:02:07.985 using asan 00:02:07.985 21:01:19 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:02:07.985 00:02:07.985 real 
0m0.001s 00:02:07.985 user 0m0.000s 00:02:07.985 sys 0m0.000s 00:02:07.985 21:01:19 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:07.985 ************************************ 00:02:07.985 END TEST asan 00:02:07.985 ************************************ 00:02:07.985 21:01:19 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:07.985 21:01:19 -- common/autotest_common.sh@1142 -- $ return 0 00:02:07.985 21:01:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:07.985 21:01:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:07.985 21:01:19 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:07.985 21:01:19 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:07.985 21:01:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.244 ************************************ 00:02:08.244 START TEST ubsan 00:02:08.244 ************************************ 00:02:08.244 using ubsan 00:02:08.244 21:01:19 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:08.244 00:02:08.244 real 0m0.000s 00:02:08.244 user 0m0.000s 00:02:08.244 sys 0m0.000s 00:02:08.244 21:01:19 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:08.244 ************************************ 00:02:08.244 END TEST ubsan 00:02:08.244 ************************************ 00:02:08.244 21:01:19 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:08.244 21:01:19 -- common/autotest_common.sh@1142 -- $ return 0 00:02:08.244 21:01:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:08.244 21:01:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:08.244 21:01:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:08.244 21:01:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:08.244 21:01:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:08.244 21:01:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:08.244 21:01:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:08.244 21:01:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:08.244 21:01:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:02:08.503 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:08.503 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:08.761 Using 'verbs' RDMA provider 00:02:25.012 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:37.212 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:37.212 Creating mk/config.mk...done. 00:02:37.212 Creating mk/cc.flags.mk...done. 00:02:37.212 Type 'make' to build. 00:02:37.212 21:01:47 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:37.212 21:01:47 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:37.212 21:01:47 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:37.212 21:01:47 -- common/autotest_common.sh@10 -- $ set +x 00:02:37.212 ************************************ 00:02:37.212 START TEST make 00:02:37.212 ************************************ 00:02:37.212 21:01:47 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:37.212 make[1]: Nothing to be done for 'all'. 
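The configure line above captures the feature set this job exercises (ASAN/UBSAN, io_uring, vfio-user, ublk, USDT, shared libraries, external fio). A minimal sketch of reproducing a comparable build outside the CI VM follows, reusing a subset of the flags logged above; the clone URL and fio path are assumptions here, since the job actually builds the spdk_repo tree rsynced into the VM rather than a fresh checkout.

# Sketch: rebuild a similar SPDK configuration by hand (flags taken from the log above).
git clone https://github.com/spdk/spdk.git && cd spdk
git submodule update --init
./configure --enable-debug --enable-werror --enable-asan --enable-ubsan \
            --with-uring --with-vfio-user --with-ublk --with-usdt \
            --with-fio=/usr/src/fio --disable-unit-tests --with-shared
make -j"$(nproc)"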
00:02:37.212 The Meson build system 00:02:37.212 Version: 1.3.1 00:02:37.212 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:37.212 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:37.212 Build type: native build 00:02:37.212 Project name: libvfio-user 00:02:37.212 Project version: 0.0.1 00:02:37.212 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:37.212 C linker for the host machine: cc ld.bfd 2.39-16 00:02:37.212 Host machine cpu family: x86_64 00:02:37.212 Host machine cpu: x86_64 00:02:37.212 Run-time dependency threads found: YES 00:02:37.212 Library dl found: YES 00:02:37.212 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:37.212 Run-time dependency json-c found: YES 0.17 00:02:37.212 Run-time dependency cmocka found: YES 1.1.7 00:02:37.212 Program pytest-3 found: NO 00:02:37.212 Program flake8 found: NO 00:02:37.212 Program misspell-fixer found: NO 00:02:37.212 Program restructuredtext-lint found: NO 00:02:37.212 Program valgrind found: YES (/usr/bin/valgrind) 00:02:37.212 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:37.212 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:37.212 Compiler for C supports arguments -Wwrite-strings: YES 00:02:37.212 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:37.212 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:37.212 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:37.212 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:37.212 Build targets in project: 8 00:02:37.212 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:37.212 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:37.212 00:02:37.212 libvfio-user 0.0.1 00:02:37.212 00:02:37.212 User defined options 00:02:37.212 buildtype : debug 00:02:37.212 default_library: shared 00:02:37.212 libdir : /usr/local/lib 00:02:37.212 00:02:37.212 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:37.779 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:37.779 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:37.779 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:37.779 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:37.779 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:37.779 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:37.779 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:38.038 [7/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:38.038 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:38.038 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:38.038 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:38.038 [11/37] Compiling C object samples/null.p/null.c.o 00:02:38.038 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:38.038 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:38.038 [14/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:38.038 [15/37] Compiling C object samples/client.p/client.c.o 00:02:38.038 [16/37] Compiling C object samples/server.p/server.c.o 00:02:38.038 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:38.038 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:38.038 [19/37] Linking target samples/client 00:02:38.038 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:38.038 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:38.038 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:38.038 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:38.038 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:38.038 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:38.038 [26/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:38.038 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:02:38.297 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:38.297 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:38.297 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:38.297 [31/37] Linking target test/unit_tests 00:02:38.297 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:38.297 [33/37] Linking target samples/server 00:02:38.297 [34/37] Linking target samples/shadow_ioeventfd_server 00:02:38.297 [35/37] Linking target samples/gpio-pci-idio-16 00:02:38.297 [36/37] Linking target samples/null 00:02:38.297 [37/37] Linking target samples/lspci 00:02:38.297 INFO: autodetecting backend as ninja 00:02:38.297 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:38.297 
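The libvfio-user build above is a standard Meson out-of-tree configure/compile, and the next step in the log stages the result with a DESTDIR install. For reference, the generic pattern looks like the sketch below; the directory names are shortened stand-ins for the spdk/build/libvfio-user paths the job uses.

# Sketch of the Meson out-of-tree build + staged install pattern used here.
meson setup build-debug libvfio-user --buildtype=debug --default-library=shared
ninja -C build-debug
DESTDIR="$PWD/stage" meson install -C build-debug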
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:38.864 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:38.864 ninja: no work to do. 00:02:46.975 The Meson build system 00:02:46.975 Version: 1.3.1 00:02:46.975 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:46.975 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:46.975 Build type: native build 00:02:46.975 Program cat found: YES (/usr/bin/cat) 00:02:46.975 Project name: DPDK 00:02:46.975 Project version: 24.03.0 00:02:46.975 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:46.975 C linker for the host machine: cc ld.bfd 2.39-16 00:02:46.975 Host machine cpu family: x86_64 00:02:46.975 Host machine cpu: x86_64 00:02:46.975 Message: ## Building in Developer Mode ## 00:02:46.975 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:46.975 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:46.975 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:46.975 Program python3 found: YES (/usr/bin/python3) 00:02:46.975 Program cat found: YES (/usr/bin/cat) 00:02:46.975 Compiler for C supports arguments -march=native: YES 00:02:46.975 Checking for size of "void *" : 8 00:02:46.975 Checking for size of "void *" : 8 (cached) 00:02:46.975 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:46.975 Library m found: YES 00:02:46.975 Library numa found: YES 00:02:46.975 Has header "numaif.h" : YES 00:02:46.975 Library fdt found: NO 00:02:46.975 Library execinfo found: NO 00:02:46.975 Has header "execinfo.h" : YES 00:02:46.975 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:46.975 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:46.975 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:46.975 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:46.975 Run-time dependency openssl found: YES 3.0.9 00:02:46.975 Run-time dependency libpcap found: YES 1.10.4 00:02:46.975 Has header "pcap.h" with dependency libpcap: YES 00:02:46.975 Compiler for C supports arguments -Wcast-qual: YES 00:02:46.975 Compiler for C supports arguments -Wdeprecated: YES 00:02:46.975 Compiler for C supports arguments -Wformat: YES 00:02:46.975 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:46.975 Compiler for C supports arguments -Wformat-security: NO 00:02:46.975 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:46.975 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:46.975 Compiler for C supports arguments -Wnested-externs: YES 00:02:46.975 Compiler for C supports arguments -Wold-style-definition: YES 00:02:46.975 Compiler for C supports arguments -Wpointer-arith: YES 00:02:46.975 Compiler for C supports arguments -Wsign-compare: YES 00:02:46.975 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:46.975 Compiler for C supports arguments -Wundef: YES 00:02:46.975 Compiler for C supports arguments -Wwrite-strings: YES 00:02:46.975 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:46.975 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:46.975 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:46.975 Compiler for C supports arguments -Wno-zero-length-bounds: 
YES 00:02:46.975 Program objdump found: YES (/usr/bin/objdump) 00:02:46.975 Compiler for C supports arguments -mavx512f: YES 00:02:46.975 Checking if "AVX512 checking" compiles: YES 00:02:46.975 Fetching value of define "__SSE4_2__" : 1 00:02:46.975 Fetching value of define "__AES__" : 1 00:02:46.975 Fetching value of define "__AVX__" : 1 00:02:46.975 Fetching value of define "__AVX2__" : 1 00:02:46.975 Fetching value of define "__AVX512BW__" : (undefined) 00:02:46.975 Fetching value of define "__AVX512CD__" : (undefined) 00:02:46.975 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:46.975 Fetching value of define "__AVX512F__" : (undefined) 00:02:46.975 Fetching value of define "__AVX512VL__" : (undefined) 00:02:46.975 Fetching value of define "__PCLMUL__" : 1 00:02:46.975 Fetching value of define "__RDRND__" : 1 00:02:46.975 Fetching value of define "__RDSEED__" : 1 00:02:46.975 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:46.975 Fetching value of define "__znver1__" : (undefined) 00:02:46.975 Fetching value of define "__znver2__" : (undefined) 00:02:46.975 Fetching value of define "__znver3__" : (undefined) 00:02:46.975 Fetching value of define "__znver4__" : (undefined) 00:02:46.975 Library asan found: YES 00:02:46.975 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:46.975 Message: lib/log: Defining dependency "log" 00:02:46.975 Message: lib/kvargs: Defining dependency "kvargs" 00:02:46.975 Message: lib/telemetry: Defining dependency "telemetry" 00:02:46.975 Library rt found: YES 00:02:46.975 Checking for function "getentropy" : NO 00:02:46.975 Message: lib/eal: Defining dependency "eal" 00:02:46.975 Message: lib/ring: Defining dependency "ring" 00:02:46.975 Message: lib/rcu: Defining dependency "rcu" 00:02:46.975 Message: lib/mempool: Defining dependency "mempool" 00:02:46.975 Message: lib/mbuf: Defining dependency "mbuf" 00:02:46.976 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:46.976 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:46.976 Compiler for C supports arguments -mpclmul: YES 00:02:46.976 Compiler for C supports arguments -maes: YES 00:02:46.976 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:46.976 Compiler for C supports arguments -mavx512bw: YES 00:02:46.976 Compiler for C supports arguments -mavx512dq: YES 00:02:46.976 Compiler for C supports arguments -mavx512vl: YES 00:02:46.976 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:46.976 Compiler for C supports arguments -mavx2: YES 00:02:46.976 Compiler for C supports arguments -mavx: YES 00:02:46.976 Message: lib/net: Defining dependency "net" 00:02:46.976 Message: lib/meter: Defining dependency "meter" 00:02:46.976 Message: lib/ethdev: Defining dependency "ethdev" 00:02:46.976 Message: lib/pci: Defining dependency "pci" 00:02:46.976 Message: lib/cmdline: Defining dependency "cmdline" 00:02:46.976 Message: lib/hash: Defining dependency "hash" 00:02:46.976 Message: lib/timer: Defining dependency "timer" 00:02:46.976 Message: lib/compressdev: Defining dependency "compressdev" 00:02:46.976 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:46.976 Message: lib/dmadev: Defining dependency "dmadev" 00:02:46.976 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:46.976 Message: lib/power: Defining dependency "power" 00:02:46.976 Message: lib/reorder: Defining dependency "reorder" 00:02:46.976 Message: lib/security: Defining dependency "security" 00:02:46.976 Has header "linux/userfaultfd.h" : YES 
00:02:46.976 Has header "linux/vduse.h" : YES 00:02:46.976 Message: lib/vhost: Defining dependency "vhost" 00:02:46.976 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:46.976 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:46.976 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:46.976 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:46.976 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:46.976 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:46.976 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:46.976 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:46.976 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:46.976 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:46.976 Program doxygen found: YES (/usr/bin/doxygen) 00:02:46.976 Configuring doxy-api-html.conf using configuration 00:02:46.976 Configuring doxy-api-man.conf using configuration 00:02:46.976 Program mandb found: YES (/usr/bin/mandb) 00:02:46.976 Program sphinx-build found: NO 00:02:46.976 Configuring rte_build_config.h using configuration 00:02:46.976 Message: 00:02:46.976 ================= 00:02:46.976 Applications Enabled 00:02:46.976 ================= 00:02:46.976 00:02:46.976 apps: 00:02:46.976 00:02:46.976 00:02:46.976 Message: 00:02:46.976 ================= 00:02:46.976 Libraries Enabled 00:02:46.976 ================= 00:02:46.976 00:02:46.976 libs: 00:02:46.976 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:46.976 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:46.976 cryptodev, dmadev, power, reorder, security, vhost, 00:02:46.976 00:02:46.976 Message: 00:02:46.976 =============== 00:02:46.976 Drivers Enabled 00:02:46.976 =============== 00:02:46.976 00:02:46.976 common: 00:02:46.976 00:02:46.976 bus: 00:02:46.976 pci, vdev, 00:02:46.976 mempool: 00:02:46.976 ring, 00:02:46.976 dma: 00:02:46.976 00:02:46.976 net: 00:02:46.976 00:02:46.976 crypto: 00:02:46.976 00:02:46.976 compress: 00:02:46.976 00:02:46.976 vdpa: 00:02:46.976 00:02:46.976 00:02:46.976 Message: 00:02:46.976 ================= 00:02:46.976 Content Skipped 00:02:46.976 ================= 00:02:46.976 00:02:46.976 apps: 00:02:46.976 dumpcap: explicitly disabled via build config 00:02:46.976 graph: explicitly disabled via build config 00:02:46.976 pdump: explicitly disabled via build config 00:02:46.976 proc-info: explicitly disabled via build config 00:02:46.976 test-acl: explicitly disabled via build config 00:02:46.976 test-bbdev: explicitly disabled via build config 00:02:46.976 test-cmdline: explicitly disabled via build config 00:02:46.976 test-compress-perf: explicitly disabled via build config 00:02:46.976 test-crypto-perf: explicitly disabled via build config 00:02:46.976 test-dma-perf: explicitly disabled via build config 00:02:46.976 test-eventdev: explicitly disabled via build config 00:02:46.976 test-fib: explicitly disabled via build config 00:02:46.976 test-flow-perf: explicitly disabled via build config 00:02:46.976 test-gpudev: explicitly disabled via build config 00:02:46.976 test-mldev: explicitly disabled via build config 00:02:46.976 test-pipeline: explicitly disabled via build config 00:02:46.976 test-pmd: explicitly disabled via build config 00:02:46.976 test-regex: explicitly disabled via build config 00:02:46.976 test-sad: explicitly disabled via build 
config 00:02:46.976 test-security-perf: explicitly disabled via build config 00:02:46.976 00:02:46.976 libs: 00:02:46.976 argparse: explicitly disabled via build config 00:02:46.976 metrics: explicitly disabled via build config 00:02:46.976 acl: explicitly disabled via build config 00:02:46.976 bbdev: explicitly disabled via build config 00:02:46.976 bitratestats: explicitly disabled via build config 00:02:46.976 bpf: explicitly disabled via build config 00:02:46.976 cfgfile: explicitly disabled via build config 00:02:46.976 distributor: explicitly disabled via build config 00:02:46.976 efd: explicitly disabled via build config 00:02:46.976 eventdev: explicitly disabled via build config 00:02:46.976 dispatcher: explicitly disabled via build config 00:02:46.976 gpudev: explicitly disabled via build config 00:02:46.976 gro: explicitly disabled via build config 00:02:46.976 gso: explicitly disabled via build config 00:02:46.976 ip_frag: explicitly disabled via build config 00:02:46.976 jobstats: explicitly disabled via build config 00:02:46.976 latencystats: explicitly disabled via build config 00:02:46.976 lpm: explicitly disabled via build config 00:02:46.976 member: explicitly disabled via build config 00:02:46.976 pcapng: explicitly disabled via build config 00:02:46.976 rawdev: explicitly disabled via build config 00:02:46.976 regexdev: explicitly disabled via build config 00:02:46.976 mldev: explicitly disabled via build config 00:02:46.976 rib: explicitly disabled via build config 00:02:46.976 sched: explicitly disabled via build config 00:02:46.976 stack: explicitly disabled via build config 00:02:46.976 ipsec: explicitly disabled via build config 00:02:46.976 pdcp: explicitly disabled via build config 00:02:46.976 fib: explicitly disabled via build config 00:02:46.976 port: explicitly disabled via build config 00:02:46.976 pdump: explicitly disabled via build config 00:02:46.976 table: explicitly disabled via build config 00:02:46.976 pipeline: explicitly disabled via build config 00:02:46.976 graph: explicitly disabled via build config 00:02:46.976 node: explicitly disabled via build config 00:02:46.976 00:02:46.976 drivers: 00:02:46.976 common/cpt: not in enabled drivers build config 00:02:46.976 common/dpaax: not in enabled drivers build config 00:02:46.976 common/iavf: not in enabled drivers build config 00:02:46.976 common/idpf: not in enabled drivers build config 00:02:46.976 common/ionic: not in enabled drivers build config 00:02:46.976 common/mvep: not in enabled drivers build config 00:02:46.976 common/octeontx: not in enabled drivers build config 00:02:46.976 bus/auxiliary: not in enabled drivers build config 00:02:46.976 bus/cdx: not in enabled drivers build config 00:02:46.976 bus/dpaa: not in enabled drivers build config 00:02:46.976 bus/fslmc: not in enabled drivers build config 00:02:46.976 bus/ifpga: not in enabled drivers build config 00:02:46.976 bus/platform: not in enabled drivers build config 00:02:46.976 bus/uacce: not in enabled drivers build config 00:02:46.976 bus/vmbus: not in enabled drivers build config 00:02:46.976 common/cnxk: not in enabled drivers build config 00:02:46.976 common/mlx5: not in enabled drivers build config 00:02:46.976 common/nfp: not in enabled drivers build config 00:02:46.976 common/nitrox: not in enabled drivers build config 00:02:46.976 common/qat: not in enabled drivers build config 00:02:46.976 common/sfc_efx: not in enabled drivers build config 00:02:46.976 mempool/bucket: not in enabled drivers build config 00:02:46.976 
mempool/cnxk: not in enabled drivers build config 00:02:46.976 mempool/dpaa: not in enabled drivers build config 00:02:46.976 mempool/dpaa2: not in enabled drivers build config 00:02:46.976 mempool/octeontx: not in enabled drivers build config 00:02:46.976 mempool/stack: not in enabled drivers build config 00:02:46.976 dma/cnxk: not in enabled drivers build config 00:02:46.976 dma/dpaa: not in enabled drivers build config 00:02:46.976 dma/dpaa2: not in enabled drivers build config 00:02:46.976 dma/hisilicon: not in enabled drivers build config 00:02:46.976 dma/idxd: not in enabled drivers build config 00:02:46.976 dma/ioat: not in enabled drivers build config 00:02:46.976 dma/skeleton: not in enabled drivers build config 00:02:46.976 net/af_packet: not in enabled drivers build config 00:02:46.976 net/af_xdp: not in enabled drivers build config 00:02:46.976 net/ark: not in enabled drivers build config 00:02:46.976 net/atlantic: not in enabled drivers build config 00:02:46.976 net/avp: not in enabled drivers build config 00:02:46.976 net/axgbe: not in enabled drivers build config 00:02:46.976 net/bnx2x: not in enabled drivers build config 00:02:46.976 net/bnxt: not in enabled drivers build config 00:02:46.976 net/bonding: not in enabled drivers build config 00:02:46.976 net/cnxk: not in enabled drivers build config 00:02:46.976 net/cpfl: not in enabled drivers build config 00:02:46.976 net/cxgbe: not in enabled drivers build config 00:02:46.976 net/dpaa: not in enabled drivers build config 00:02:46.976 net/dpaa2: not in enabled drivers build config 00:02:46.976 net/e1000: not in enabled drivers build config 00:02:46.976 net/ena: not in enabled drivers build config 00:02:46.976 net/enetc: not in enabled drivers build config 00:02:46.976 net/enetfec: not in enabled drivers build config 00:02:46.976 net/enic: not in enabled drivers build config 00:02:46.976 net/failsafe: not in enabled drivers build config 00:02:46.976 net/fm10k: not in enabled drivers build config 00:02:46.976 net/gve: not in enabled drivers build config 00:02:46.976 net/hinic: not in enabled drivers build config 00:02:46.976 net/hns3: not in enabled drivers build config 00:02:46.976 net/i40e: not in enabled drivers build config 00:02:46.976 net/iavf: not in enabled drivers build config 00:02:46.976 net/ice: not in enabled drivers build config 00:02:46.976 net/idpf: not in enabled drivers build config 00:02:46.977 net/igc: not in enabled drivers build config 00:02:46.977 net/ionic: not in enabled drivers build config 00:02:46.977 net/ipn3ke: not in enabled drivers build config 00:02:46.977 net/ixgbe: not in enabled drivers build config 00:02:46.977 net/mana: not in enabled drivers build config 00:02:46.977 net/memif: not in enabled drivers build config 00:02:46.977 net/mlx4: not in enabled drivers build config 00:02:46.977 net/mlx5: not in enabled drivers build config 00:02:46.977 net/mvneta: not in enabled drivers build config 00:02:46.977 net/mvpp2: not in enabled drivers build config 00:02:46.977 net/netvsc: not in enabled drivers build config 00:02:46.977 net/nfb: not in enabled drivers build config 00:02:46.977 net/nfp: not in enabled drivers build config 00:02:46.977 net/ngbe: not in enabled drivers build config 00:02:46.977 net/null: not in enabled drivers build config 00:02:46.977 net/octeontx: not in enabled drivers build config 00:02:46.977 net/octeon_ep: not in enabled drivers build config 00:02:46.977 net/pcap: not in enabled drivers build config 00:02:46.977 net/pfe: not in enabled drivers build config 
00:02:46.977 net/qede: not in enabled drivers build config 00:02:46.977 net/ring: not in enabled drivers build config 00:02:46.977 net/sfc: not in enabled drivers build config 00:02:46.977 net/softnic: not in enabled drivers build config 00:02:46.977 net/tap: not in enabled drivers build config 00:02:46.977 net/thunderx: not in enabled drivers build config 00:02:46.977 net/txgbe: not in enabled drivers build config 00:02:46.977 net/vdev_netvsc: not in enabled drivers build config 00:02:46.977 net/vhost: not in enabled drivers build config 00:02:46.977 net/virtio: not in enabled drivers build config 00:02:46.977 net/vmxnet3: not in enabled drivers build config 00:02:46.977 raw/*: missing internal dependency, "rawdev" 00:02:46.977 crypto/armv8: not in enabled drivers build config 00:02:46.977 crypto/bcmfs: not in enabled drivers build config 00:02:46.977 crypto/caam_jr: not in enabled drivers build config 00:02:46.977 crypto/ccp: not in enabled drivers build config 00:02:46.977 crypto/cnxk: not in enabled drivers build config 00:02:46.977 crypto/dpaa_sec: not in enabled drivers build config 00:02:46.977 crypto/dpaa2_sec: not in enabled drivers build config 00:02:46.977 crypto/ipsec_mb: not in enabled drivers build config 00:02:46.977 crypto/mlx5: not in enabled drivers build config 00:02:46.977 crypto/mvsam: not in enabled drivers build config 00:02:46.977 crypto/nitrox: not in enabled drivers build config 00:02:46.977 crypto/null: not in enabled drivers build config 00:02:46.977 crypto/octeontx: not in enabled drivers build config 00:02:46.977 crypto/openssl: not in enabled drivers build config 00:02:46.977 crypto/scheduler: not in enabled drivers build config 00:02:46.977 crypto/uadk: not in enabled drivers build config 00:02:46.977 crypto/virtio: not in enabled drivers build config 00:02:46.977 compress/isal: not in enabled drivers build config 00:02:46.977 compress/mlx5: not in enabled drivers build config 00:02:46.977 compress/nitrox: not in enabled drivers build config 00:02:46.977 compress/octeontx: not in enabled drivers build config 00:02:46.977 compress/zlib: not in enabled drivers build config 00:02:46.977 regex/*: missing internal dependency, "regexdev" 00:02:46.977 ml/*: missing internal dependency, "mldev" 00:02:46.977 vdpa/ifc: not in enabled drivers build config 00:02:46.977 vdpa/mlx5: not in enabled drivers build config 00:02:46.977 vdpa/nfp: not in enabled drivers build config 00:02:46.977 vdpa/sfc: not in enabled drivers build config 00:02:46.977 event/*: missing internal dependency, "eventdev" 00:02:46.977 baseband/*: missing internal dependency, "bbdev" 00:02:46.977 gpu/*: missing internal dependency, "gpudev" 00:02:46.977 00:02:46.977 00:02:46.977 Build targets in project: 85 00:02:46.977 00:02:46.977 DPDK 24.03.0 00:02:46.977 00:02:46.977 User defined options 00:02:46.977 buildtype : debug 00:02:46.977 default_library : shared 00:02:46.977 libdir : lib 00:02:46.977 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:46.977 b_sanitize : address 00:02:46.977 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:46.977 c_link_args : 00:02:46.977 cpu_instruction_set: native 00:02:46.977 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:46.977 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:46.977 enable_docs : false 00:02:46.977 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:46.977 enable_kmods : false 00:02:46.977 max_lcores : 128 00:02:46.977 tests : false 00:02:46.977 00:02:46.977 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:47.236 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:47.236 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:47.236 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:47.236 [3/268] Linking static target lib/librte_log.a 00:02:47.236 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:47.236 [5/268] Linking static target lib/librte_kvargs.a 00:02:47.236 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:47.803 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.803 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:47.803 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:48.061 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:48.061 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:48.061 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:48.061 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:48.061 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:48.061 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:48.061 [16/268] Linking static target lib/librte_telemetry.a 00:02:48.319 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.319 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:48.319 [19/268] Linking target lib/librte_log.so.24.1 00:02:48.319 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:48.578 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:48.578 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:48.836 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:48.836 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:49.094 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:49.094 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:49.094 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.094 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:49.094 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:49.094 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:49.094 [31/268] Linking target lib/librte_telemetry.so.24.1 00:02:49.094 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:49.094 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:49.353 [34/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:49.353 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:49.353 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:49.353 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:49.918 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:49.918 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:49.918 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:50.176 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:50.176 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:50.176 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:50.176 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:50.176 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:50.176 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:50.434 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:50.434 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:50.434 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:50.691 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:50.691 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:50.691 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:50.949 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:51.206 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:51.206 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:51.206 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:51.465 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:51.465 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:51.465 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:51.465 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:51.465 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:51.465 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:51.465 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:52.030 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:52.030 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:52.030 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:52.030 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:52.030 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:52.592 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:52.592 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:52.592 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:52.592 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:52.592 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 
00:02:52.592 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:52.850 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:52.850 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:52.850 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:53.108 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:53.366 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:53.366 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:53.366 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:53.366 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:53.624 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:53.624 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:53.624 [85/268] Linking static target lib/librte_eal.a 00:02:53.882 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:53.882 [87/268] Linking static target lib/librte_ring.a 00:02:54.140 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:54.140 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:54.140 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:54.140 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:54.398 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:54.398 [93/268] Linking static target lib/librte_rcu.a 00:02:54.398 [94/268] Linking static target lib/librte_mempool.a 00:02:54.398 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.398 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:54.655 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:54.913 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.913 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:54.913 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:55.170 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:55.170 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:55.428 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:55.428 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:55.685 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:55.685 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:55.685 [107/268] Linking static target lib/librte_net.a 00:02:55.685 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:55.685 [109/268] Linking static target lib/librte_meter.a 00:02:55.685 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.943 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:55.943 [112/268] Linking static target lib/librte_mbuf.a 00:02:56.200 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.200 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.200 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 
00:02:56.200 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:56.458 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:56.716 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:56.973 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:56.973 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:56.973 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.973 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:57.231 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:57.795 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:57.795 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:57.795 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:57.796 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:57.796 [128/268] Linking static target lib/librte_pci.a 00:02:57.796 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:57.796 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:57.796 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:57.796 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:58.053 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:58.053 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:58.053 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:58.053 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:58.053 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:58.053 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:58.053 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:58.053 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:58.310 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:58.310 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:58.310 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:58.310 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:58.310 [145/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.310 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:58.570 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:58.570 [148/268] Linking static target lib/librte_cmdline.a 00:02:58.844 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:58.844 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:59.119 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:59.119 [152/268] Linking static target lib/librte_timer.a 00:02:59.119 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:59.119 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:59.119 [155/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:59.376 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:59.376 [157/268] Linking static target lib/librte_ethdev.a 00:02:59.632 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:59.632 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:59.632 [160/268] Linking static target lib/librte_hash.a 00:02:59.632 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:59.632 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.890 [163/268] Linking static target lib/librte_compressdev.a 00:02:59.890 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:59.890 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:00.149 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:00.149 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:00.149 [168/268] Linking static target lib/librte_dmadev.a 00:03:00.149 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:00.407 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.407 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:00.407 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:00.665 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:00.923 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.923 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:00.923 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.923 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:01.181 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.181 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:01.181 [180/268] Linking static target lib/librte_cryptodev.a 00:03:01.181 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:01.181 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:01.181 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:01.440 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:01.698 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:01.698 [186/268] Linking static target lib/librte_reorder.a 00:03:01.955 [187/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:01.955 [188/268] Linking static target lib/librte_power.a 00:03:01.955 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:02.211 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:02.211 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:02.211 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:02.211 [193/268] Linking static target lib/librte_security.a 00:03:02.470 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.728 [195/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:02.986 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.986 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.986 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:02.986 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:03.244 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:03.244 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:03.502 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.761 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:03.761 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:03.761 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:03.761 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:04.019 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:04.019 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:04.019 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:04.278 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.278 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:04.278 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:04.278 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.278 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.278 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:04.278 [216/268] Linking static target drivers/librte_bus_pci.a 00:03:04.278 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.278 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.538 [219/268] Linking static target drivers/librte_bus_vdev.a 00:03:04.538 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:04.538 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:04.538 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.797 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:04.797 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.797 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.797 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:04.797 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.365 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.623 [229/268] Linking target lib/librte_eal.so.24.1 00:03:05.623 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:05.881 [231/268] Linking target lib/librte_timer.so.24.1 00:03:05.881 [232/268] Linking target drivers/librte_bus_vdev.so.24.1 
00:03:05.881 [233/268] Linking target lib/librte_ring.so.24.1 00:03:05.881 [234/268] Linking target lib/librte_meter.so.24.1 00:03:05.881 [235/268] Linking target lib/librte_pci.so.24.1 00:03:05.881 [236/268] Linking target lib/librte_dmadev.so.24.1 00:03:05.881 [237/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:05.881 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:05.881 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:05.881 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:05.881 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:05.881 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:05.881 [243/268] Linking target lib/librte_mempool.so.24.1 00:03:05.881 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:05.881 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:06.140 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:06.140 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:06.140 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:06.140 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:06.399 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:06.399 [251/268] Linking target lib/librte_compressdev.so.24.1 00:03:06.399 [252/268] Linking target lib/librte_net.so.24.1 00:03:06.399 [253/268] Linking target lib/librte_reorder.so.24.1 00:03:06.399 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:06.658 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:06.658 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:06.658 [257/268] Linking target lib/librte_cmdline.so.24.1 00:03:06.658 [258/268] Linking target lib/librte_hash.so.24.1 00:03:06.658 [259/268] Linking target lib/librte_security.so.24.1 00:03:06.658 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:07.225 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.225 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:07.484 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:07.484 [264/268] Linking target lib/librte_power.so.24.1 00:03:10.016 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:10.016 [266/268] Linking static target lib/librte_vhost.a 00:03:11.922 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.922 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:11.922 INFO: autodetecting backend as ninja 00:03:11.922 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:12.859 CC lib/ut_mock/mock.o 00:03:12.859 CC lib/log/log.o 00:03:12.859 CC lib/ut/ut.o 00:03:12.859 CC lib/log/log_deprecated.o 00:03:12.859 CC lib/log/log_flags.o 00:03:13.116 LIB libspdk_ut.a 00:03:13.116 LIB libspdk_log.a 00:03:13.116 SO libspdk_ut.so.2.0 00:03:13.116 LIB libspdk_ut_mock.a 00:03:13.116 SO libspdk_log.so.7.0 00:03:13.116 SO libspdk_ut_mock.so.6.0 00:03:13.116 SYMLINK libspdk_ut.so 00:03:13.116 SYMLINK libspdk_ut_mock.so 00:03:13.116 SYMLINK 
libspdk_log.so 00:03:13.378 CC lib/ioat/ioat.o 00:03:13.378 CC lib/dma/dma.o 00:03:13.378 CXX lib/trace_parser/trace.o 00:03:13.378 CC lib/util/base64.o 00:03:13.378 CC lib/util/bit_array.o 00:03:13.378 CC lib/util/cpuset.o 00:03:13.378 CC lib/util/crc32.o 00:03:13.378 CC lib/util/crc16.o 00:03:13.378 CC lib/util/crc32c.o 00:03:13.643 CC lib/vfio_user/host/vfio_user_pci.o 00:03:13.643 CC lib/util/crc32_ieee.o 00:03:13.643 CC lib/util/crc64.o 00:03:13.643 CC lib/util/dif.o 00:03:13.643 CC lib/util/fd.o 00:03:13.643 LIB libspdk_dma.a 00:03:13.643 SO libspdk_dma.so.4.0 00:03:13.643 CC lib/vfio_user/host/vfio_user.o 00:03:13.643 CC lib/util/file.o 00:03:13.643 CC lib/util/hexlify.o 00:03:13.643 CC lib/util/iov.o 00:03:13.901 SYMLINK libspdk_dma.so 00:03:13.901 CC lib/util/math.o 00:03:13.901 CC lib/util/pipe.o 00:03:13.901 LIB libspdk_ioat.a 00:03:13.901 CC lib/util/strerror_tls.o 00:03:13.901 SO libspdk_ioat.so.7.0 00:03:13.901 CC lib/util/string.o 00:03:13.901 CC lib/util/uuid.o 00:03:13.901 SYMLINK libspdk_ioat.so 00:03:13.901 LIB libspdk_vfio_user.a 00:03:13.901 CC lib/util/fd_group.o 00:03:13.901 CC lib/util/xor.o 00:03:13.901 CC lib/util/zipf.o 00:03:13.901 SO libspdk_vfio_user.so.5.0 00:03:14.159 SYMLINK libspdk_vfio_user.so 00:03:14.417 LIB libspdk_util.a 00:03:14.417 SO libspdk_util.so.9.1 00:03:14.676 LIB libspdk_trace_parser.a 00:03:14.676 SYMLINK libspdk_util.so 00:03:14.676 SO libspdk_trace_parser.so.5.0 00:03:14.676 SYMLINK libspdk_trace_parser.so 00:03:14.934 CC lib/conf/conf.o 00:03:14.934 CC lib/idxd/idxd.o 00:03:14.934 CC lib/idxd/idxd_user.o 00:03:14.934 CC lib/idxd/idxd_kernel.o 00:03:14.934 CC lib/rdma_utils/rdma_utils.o 00:03:14.934 CC lib/json/json_parse.o 00:03:14.934 CC lib/vmd/vmd.o 00:03:14.934 CC lib/rdma_provider/common.o 00:03:14.934 CC lib/json/json_util.o 00:03:14.934 CC lib/env_dpdk/env.o 00:03:14.934 CC lib/json/json_write.o 00:03:14.934 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:15.192 LIB libspdk_conf.a 00:03:15.192 CC lib/vmd/led.o 00:03:15.192 CC lib/env_dpdk/memory.o 00:03:15.192 SO libspdk_conf.so.6.0 00:03:15.192 CC lib/env_dpdk/pci.o 00:03:15.192 LIB libspdk_rdma_utils.a 00:03:15.192 SO libspdk_rdma_utils.so.1.0 00:03:15.192 SYMLINK libspdk_conf.so 00:03:15.192 CC lib/env_dpdk/init.o 00:03:15.192 SYMLINK libspdk_rdma_utils.so 00:03:15.192 CC lib/env_dpdk/threads.o 00:03:15.192 LIB libspdk_rdma_provider.a 00:03:15.192 CC lib/env_dpdk/pci_ioat.o 00:03:15.192 SO libspdk_rdma_provider.so.6.0 00:03:15.449 LIB libspdk_json.a 00:03:15.449 SYMLINK libspdk_rdma_provider.so 00:03:15.449 CC lib/env_dpdk/pci_virtio.o 00:03:15.449 SO libspdk_json.so.6.0 00:03:15.449 CC lib/env_dpdk/pci_vmd.o 00:03:15.449 CC lib/env_dpdk/pci_idxd.o 00:03:15.449 SYMLINK libspdk_json.so 00:03:15.449 CC lib/env_dpdk/pci_event.o 00:03:15.449 CC lib/env_dpdk/sigbus_handler.o 00:03:15.449 CC lib/env_dpdk/pci_dpdk.o 00:03:15.449 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:15.707 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:15.707 LIB libspdk_idxd.a 00:03:15.707 SO libspdk_idxd.so.12.0 00:03:15.707 LIB libspdk_vmd.a 00:03:15.707 SO libspdk_vmd.so.6.0 00:03:15.707 SYMLINK libspdk_idxd.so 00:03:15.707 SYMLINK libspdk_vmd.so 00:03:15.707 CC lib/jsonrpc/jsonrpc_server.o 00:03:15.707 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:15.707 CC lib/jsonrpc/jsonrpc_client.o 00:03:15.707 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:16.272 LIB libspdk_jsonrpc.a 00:03:16.272 SO libspdk_jsonrpc.so.6.0 00:03:16.272 SYMLINK libspdk_jsonrpc.so 00:03:16.530 CC lib/rpc/rpc.o 00:03:16.788 LIB libspdk_env_dpdk.a 00:03:16.788 
LIB libspdk_rpc.a 00:03:16.788 SO libspdk_rpc.so.6.0 00:03:16.788 SO libspdk_env_dpdk.so.14.1 00:03:16.788 SYMLINK libspdk_rpc.so 00:03:17.045 SYMLINK libspdk_env_dpdk.so 00:03:17.045 CC lib/trace/trace.o 00:03:17.045 CC lib/trace/trace_flags.o 00:03:17.045 CC lib/trace/trace_rpc.o 00:03:17.045 CC lib/notify/notify.o 00:03:17.045 CC lib/keyring/keyring_rpc.o 00:03:17.045 CC lib/keyring/keyring.o 00:03:17.045 CC lib/notify/notify_rpc.o 00:03:17.302 LIB libspdk_notify.a 00:03:17.302 LIB libspdk_keyring.a 00:03:17.302 SO libspdk_notify.so.6.0 00:03:17.302 SO libspdk_keyring.so.1.0 00:03:17.302 LIB libspdk_trace.a 00:03:17.302 SYMLINK libspdk_notify.so 00:03:17.560 SYMLINK libspdk_keyring.so 00:03:17.560 SO libspdk_trace.so.10.0 00:03:17.560 SYMLINK libspdk_trace.so 00:03:17.817 CC lib/sock/sock.o 00:03:17.817 CC lib/sock/sock_rpc.o 00:03:17.817 CC lib/thread/thread.o 00:03:17.817 CC lib/thread/iobuf.o 00:03:18.384 LIB libspdk_sock.a 00:03:18.384 SO libspdk_sock.so.10.0 00:03:18.384 SYMLINK libspdk_sock.so 00:03:18.643 CC lib/nvme/nvme_ctrlr.o 00:03:18.643 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:18.643 CC lib/nvme/nvme_ns_cmd.o 00:03:18.643 CC lib/nvme/nvme_fabric.o 00:03:18.643 CC lib/nvme/nvme_ns.o 00:03:18.643 CC lib/nvme/nvme_pcie_common.o 00:03:18.643 CC lib/nvme/nvme_pcie.o 00:03:18.643 CC lib/nvme/nvme.o 00:03:18.643 CC lib/nvme/nvme_qpair.o 00:03:19.577 CC lib/nvme/nvme_quirks.o 00:03:19.577 CC lib/nvme/nvme_transport.o 00:03:19.577 CC lib/nvme/nvme_discovery.o 00:03:19.577 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:19.836 LIB libspdk_thread.a 00:03:19.836 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:19.836 SO libspdk_thread.so.10.1 00:03:19.836 CC lib/nvme/nvme_tcp.o 00:03:19.836 CC lib/nvme/nvme_opal.o 00:03:19.836 SYMLINK libspdk_thread.so 00:03:19.836 CC lib/nvme/nvme_io_msg.o 00:03:20.095 CC lib/nvme/nvme_poll_group.o 00:03:20.095 CC lib/nvme/nvme_zns.o 00:03:20.353 CC lib/nvme/nvme_stubs.o 00:03:20.353 CC lib/nvme/nvme_auth.o 00:03:20.353 CC lib/nvme/nvme_cuse.o 00:03:20.353 CC lib/nvme/nvme_vfio_user.o 00:03:20.612 CC lib/nvme/nvme_rdma.o 00:03:20.612 CC lib/accel/accel.o 00:03:20.871 CC lib/accel/accel_rpc.o 00:03:20.871 CC lib/blob/blobstore.o 00:03:20.871 CC lib/blob/request.o 00:03:21.145 CC lib/blob/zeroes.o 00:03:21.145 CC lib/accel/accel_sw.o 00:03:21.404 CC lib/blob/blob_bs_dev.o 00:03:21.661 CC lib/init/json_config.o 00:03:21.662 CC lib/init/subsystem.o 00:03:21.662 CC lib/virtio/virtio.o 00:03:21.662 CC lib/init/subsystem_rpc.o 00:03:21.662 CC lib/init/rpc.o 00:03:21.662 CC lib/vfu_tgt/tgt_endpoint.o 00:03:21.662 CC lib/vfu_tgt/tgt_rpc.o 00:03:21.662 CC lib/virtio/virtio_vhost_user.o 00:03:21.662 CC lib/virtio/virtio_vfio_user.o 00:03:21.919 CC lib/virtio/virtio_pci.o 00:03:21.919 LIB libspdk_init.a 00:03:21.919 SO libspdk_init.so.5.0 00:03:21.919 SYMLINK libspdk_init.so 00:03:21.919 LIB libspdk_accel.a 00:03:21.919 SO libspdk_accel.so.15.1 00:03:22.177 LIB libspdk_vfu_tgt.a 00:03:22.177 SO libspdk_vfu_tgt.so.3.0 00:03:22.177 SYMLINK libspdk_accel.so 00:03:22.177 CC lib/event/reactor.o 00:03:22.177 CC lib/event/log_rpc.o 00:03:22.177 CC lib/event/app.o 00:03:22.177 CC lib/event/scheduler_static.o 00:03:22.177 CC lib/event/app_rpc.o 00:03:22.177 LIB libspdk_virtio.a 00:03:22.177 SYMLINK libspdk_vfu_tgt.so 00:03:22.177 SO libspdk_virtio.so.7.0 00:03:22.177 LIB libspdk_nvme.a 00:03:22.177 SYMLINK libspdk_virtio.so 00:03:22.435 CC lib/bdev/bdev.o 00:03:22.435 CC lib/bdev/bdev_zone.o 00:03:22.435 CC lib/bdev/bdev_rpc.o 00:03:22.435 CC lib/bdev/part.o 00:03:22.435 CC lib/bdev/scsi_nvme.o 
00:03:22.435 SO libspdk_nvme.so.13.1 00:03:22.694 LIB libspdk_event.a 00:03:22.694 SO libspdk_event.so.14.0 00:03:22.953 SYMLINK libspdk_nvme.so 00:03:22.953 SYMLINK libspdk_event.so 00:03:24.858 LIB libspdk_blob.a 00:03:24.858 SO libspdk_blob.so.11.0 00:03:24.858 SYMLINK libspdk_blob.so 00:03:25.117 CC lib/lvol/lvol.o 00:03:25.117 CC lib/blobfs/tree.o 00:03:25.117 CC lib/blobfs/blobfs.o 00:03:25.376 LIB libspdk_bdev.a 00:03:25.635 SO libspdk_bdev.so.15.1 00:03:25.635 SYMLINK libspdk_bdev.so 00:03:25.894 CC lib/nbd/nbd.o 00:03:25.894 CC lib/nbd/nbd_rpc.o 00:03:25.894 CC lib/nvmf/ctrlr.o 00:03:25.894 CC lib/ftl/ftl_core.o 00:03:25.894 CC lib/nvmf/ctrlr_discovery.o 00:03:25.894 CC lib/nvmf/ctrlr_bdev.o 00:03:25.894 CC lib/ublk/ublk.o 00:03:25.894 CC lib/scsi/dev.o 00:03:26.153 CC lib/ublk/ublk_rpc.o 00:03:26.153 CC lib/scsi/lun.o 00:03:26.411 LIB libspdk_blobfs.a 00:03:26.411 CC lib/scsi/port.o 00:03:26.412 SO libspdk_blobfs.so.10.0 00:03:26.412 LIB libspdk_lvol.a 00:03:26.412 SO libspdk_lvol.so.10.0 00:03:26.412 SYMLINK libspdk_blobfs.so 00:03:26.412 CC lib/scsi/scsi.o 00:03:26.412 LIB libspdk_nbd.a 00:03:26.412 CC lib/ftl/ftl_init.o 00:03:26.412 CC lib/nvmf/subsystem.o 00:03:26.670 SO libspdk_nbd.so.7.0 00:03:26.670 SYMLINK libspdk_lvol.so 00:03:26.670 CC lib/scsi/scsi_bdev.o 00:03:26.670 CC lib/ftl/ftl_layout.o 00:03:26.670 SYMLINK libspdk_nbd.so 00:03:26.670 CC lib/ftl/ftl_debug.o 00:03:26.670 CC lib/scsi/scsi_pr.o 00:03:26.670 CC lib/scsi/scsi_rpc.o 00:03:26.670 CC lib/scsi/task.o 00:03:26.670 LIB libspdk_ublk.a 00:03:26.928 CC lib/nvmf/nvmf.o 00:03:26.928 CC lib/ftl/ftl_io.o 00:03:26.928 SO libspdk_ublk.so.3.0 00:03:26.928 CC lib/ftl/ftl_sb.o 00:03:26.928 SYMLINK libspdk_ublk.so 00:03:26.928 CC lib/ftl/ftl_l2p.o 00:03:26.928 CC lib/ftl/ftl_l2p_flat.o 00:03:26.928 CC lib/ftl/ftl_nv_cache.o 00:03:26.928 CC lib/nvmf/nvmf_rpc.o 00:03:27.187 CC lib/nvmf/transport.o 00:03:27.187 CC lib/ftl/ftl_band.o 00:03:27.187 CC lib/ftl/ftl_band_ops.o 00:03:27.187 LIB libspdk_scsi.a 00:03:27.187 CC lib/nvmf/tcp.o 00:03:27.187 SO libspdk_scsi.so.9.0 00:03:27.445 SYMLINK libspdk_scsi.so 00:03:27.445 CC lib/iscsi/conn.o 00:03:27.445 CC lib/iscsi/init_grp.o 00:03:27.703 CC lib/iscsi/iscsi.o 00:03:27.969 CC lib/ftl/ftl_writer.o 00:03:27.969 CC lib/ftl/ftl_rq.o 00:03:27.969 CC lib/ftl/ftl_reloc.o 00:03:27.969 CC lib/ftl/ftl_l2p_cache.o 00:03:28.250 CC lib/nvmf/stubs.o 00:03:28.250 CC lib/nvmf/mdns_server.o 00:03:28.250 CC lib/ftl/ftl_p2l.o 00:03:28.250 CC lib/vhost/vhost.o 00:03:28.250 CC lib/vhost/vhost_rpc.o 00:03:28.519 CC lib/vhost/vhost_scsi.o 00:03:28.519 CC lib/vhost/vhost_blk.o 00:03:28.519 CC lib/vhost/rte_vhost_user.o 00:03:28.777 CC lib/iscsi/md5.o 00:03:28.777 CC lib/ftl/mngt/ftl_mngt.o 00:03:28.777 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:28.777 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:29.035 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:29.035 CC lib/nvmf/vfio_user.o 00:03:29.035 CC lib/nvmf/rdma.o 00:03:29.035 CC lib/nvmf/auth.o 00:03:29.035 CC lib/iscsi/param.o 00:03:29.294 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:29.294 CC lib/iscsi/portal_grp.o 00:03:29.552 CC lib/iscsi/tgt_node.o 00:03:29.552 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:29.552 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:29.552 CC lib/iscsi/iscsi_subsystem.o 00:03:29.809 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:29.809 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:29.809 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:29.809 CC lib/iscsi/iscsi_rpc.o 00:03:29.809 LIB libspdk_vhost.a 00:03:30.067 CC lib/iscsi/task.o 00:03:30.067 CC lib/ftl/mngt/ftl_mngt_p2l.o 
00:03:30.067 SO libspdk_vhost.so.8.0 00:03:30.067 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:30.067 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:30.067 SYMLINK libspdk_vhost.so 00:03:30.067 CC lib/ftl/utils/ftl_conf.o 00:03:30.067 CC lib/ftl/utils/ftl_md.o 00:03:30.067 CC lib/ftl/utils/ftl_mempool.o 00:03:30.324 CC lib/ftl/utils/ftl_bitmap.o 00:03:30.324 CC lib/ftl/utils/ftl_property.o 00:03:30.324 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:30.324 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:30.324 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:30.324 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:30.324 LIB libspdk_iscsi.a 00:03:30.581 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:30.581 SO libspdk_iscsi.so.8.0 00:03:30.581 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:30.581 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:30.581 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:30.581 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:30.581 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:30.581 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:30.838 CC lib/ftl/base/ftl_base_dev.o 00:03:30.838 SYMLINK libspdk_iscsi.so 00:03:30.838 CC lib/ftl/base/ftl_base_bdev.o 00:03:30.838 CC lib/ftl/ftl_trace.o 00:03:31.096 LIB libspdk_ftl.a 00:03:31.354 SO libspdk_ftl.so.9.0 00:03:31.612 SYMLINK libspdk_ftl.so 00:03:31.870 LIB libspdk_nvmf.a 00:03:31.870 SO libspdk_nvmf.so.18.1 00:03:32.437 SYMLINK libspdk_nvmf.so 00:03:32.695 CC module/vfu_device/vfu_virtio.o 00:03:32.695 CC module/env_dpdk/env_dpdk_rpc.o 00:03:32.695 CC module/accel/error/accel_error.o 00:03:32.695 CC module/accel/ioat/accel_ioat.o 00:03:32.695 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:32.695 CC module/scheduler/gscheduler/gscheduler.o 00:03:32.695 CC module/keyring/file/keyring.o 00:03:32.695 CC module/sock/posix/posix.o 00:03:32.695 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:32.695 CC module/blob/bdev/blob_bdev.o 00:03:32.695 LIB libspdk_env_dpdk_rpc.a 00:03:32.953 SO libspdk_env_dpdk_rpc.so.6.0 00:03:32.953 SYMLINK libspdk_env_dpdk_rpc.so 00:03:32.953 LIB libspdk_scheduler_gscheduler.a 00:03:32.953 CC module/keyring/file/keyring_rpc.o 00:03:32.953 CC module/accel/error/accel_error_rpc.o 00:03:32.953 SO libspdk_scheduler_gscheduler.so.4.0 00:03:32.953 LIB libspdk_scheduler_dpdk_governor.a 00:03:32.953 CC module/accel/ioat/accel_ioat_rpc.o 00:03:32.953 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:32.953 LIB libspdk_scheduler_dynamic.a 00:03:32.953 SO libspdk_scheduler_dynamic.so.4.0 00:03:32.953 SYMLINK libspdk_scheduler_gscheduler.so 00:03:32.953 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:32.953 CC module/vfu_device/vfu_virtio_blk.o 00:03:32.953 LIB libspdk_blob_bdev.a 00:03:32.953 SYMLINK libspdk_scheduler_dynamic.so 00:03:32.953 CC module/sock/uring/uring.o 00:03:32.953 LIB libspdk_keyring_file.a 00:03:33.212 LIB libspdk_accel_error.a 00:03:33.212 SO libspdk_blob_bdev.so.11.0 00:03:33.212 LIB libspdk_accel_ioat.a 00:03:33.212 SO libspdk_accel_error.so.2.0 00:03:33.212 SO libspdk_keyring_file.so.1.0 00:03:33.212 SO libspdk_accel_ioat.so.6.0 00:03:33.212 SYMLINK libspdk_blob_bdev.so 00:03:33.212 CC module/vfu_device/vfu_virtio_scsi.o 00:03:33.212 SYMLINK libspdk_keyring_file.so 00:03:33.212 SYMLINK libspdk_accel_ioat.so 00:03:33.212 SYMLINK libspdk_accel_error.so 00:03:33.212 CC module/vfu_device/vfu_virtio_rpc.o 00:03:33.212 CC module/keyring/linux/keyring.o 00:03:33.212 CC module/accel/dsa/accel_dsa.o 00:03:33.470 CC module/accel/iaa/accel_iaa.o 00:03:33.470 CC module/keyring/linux/keyring_rpc.o 00:03:33.470 CC module/accel/dsa/accel_dsa_rpc.o 00:03:33.470 
CC module/bdev/delay/vbdev_delay.o 00:03:33.470 CC module/bdev/error/vbdev_error.o 00:03:33.470 LIB libspdk_keyring_linux.a 00:03:33.728 SO libspdk_keyring_linux.so.1.0 00:03:33.728 CC module/bdev/error/vbdev_error_rpc.o 00:03:33.728 LIB libspdk_vfu_device.a 00:03:33.728 CC module/bdev/gpt/gpt.o 00:03:33.728 LIB libspdk_accel_dsa.a 00:03:33.728 CC module/accel/iaa/accel_iaa_rpc.o 00:03:33.728 SO libspdk_vfu_device.so.3.0 00:03:33.728 SYMLINK libspdk_keyring_linux.so 00:03:33.728 SO libspdk_accel_dsa.so.5.0 00:03:33.728 LIB libspdk_sock_posix.a 00:03:33.728 SO libspdk_sock_posix.so.6.0 00:03:33.728 SYMLINK libspdk_accel_dsa.so 00:03:33.728 LIB libspdk_accel_iaa.a 00:03:33.728 SYMLINK libspdk_vfu_device.so 00:03:33.728 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:33.728 SO libspdk_accel_iaa.so.3.0 00:03:33.728 LIB libspdk_bdev_error.a 00:03:33.986 SYMLINK libspdk_sock_posix.so 00:03:33.986 CC module/bdev/gpt/vbdev_gpt.o 00:03:33.986 SO libspdk_bdev_error.so.6.0 00:03:33.986 SYMLINK libspdk_accel_iaa.so 00:03:33.986 CC module/bdev/lvol/vbdev_lvol.o 00:03:33.986 SYMLINK libspdk_bdev_error.so 00:03:33.986 CC module/blobfs/bdev/blobfs_bdev.o 00:03:33.986 CC module/bdev/malloc/bdev_malloc.o 00:03:33.986 LIB libspdk_bdev_delay.a 00:03:33.986 LIB libspdk_sock_uring.a 00:03:33.986 SO libspdk_bdev_delay.so.6.0 00:03:33.986 CC module/bdev/null/bdev_null.o 00:03:33.986 SO libspdk_sock_uring.so.5.0 00:03:33.986 CC module/bdev/nvme/bdev_nvme.o 00:03:34.244 SYMLINK libspdk_bdev_delay.so 00:03:34.244 CC module/bdev/passthru/vbdev_passthru.o 00:03:34.244 SYMLINK libspdk_sock_uring.so 00:03:34.244 CC module/bdev/raid/bdev_raid.o 00:03:34.244 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:34.244 LIB libspdk_bdev_gpt.a 00:03:34.244 SO libspdk_bdev_gpt.so.6.0 00:03:34.244 CC module/bdev/split/vbdev_split.o 00:03:34.244 SYMLINK libspdk_bdev_gpt.so 00:03:34.244 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:34.244 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:34.502 CC module/bdev/null/bdev_null_rpc.o 00:03:34.502 LIB libspdk_blobfs_bdev.a 00:03:34.502 SO libspdk_blobfs_bdev.so.6.0 00:03:34.502 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:34.502 SYMLINK libspdk_blobfs_bdev.so 00:03:34.502 LIB libspdk_bdev_malloc.a 00:03:34.502 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:34.502 SO libspdk_bdev_malloc.so.6.0 00:03:34.502 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:34.502 CC module/bdev/split/vbdev_split_rpc.o 00:03:34.502 LIB libspdk_bdev_null.a 00:03:34.761 SO libspdk_bdev_null.so.6.0 00:03:34.761 SYMLINK libspdk_bdev_malloc.so 00:03:34.761 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:34.761 LIB libspdk_bdev_passthru.a 00:03:34.761 CC module/bdev/uring/bdev_uring.o 00:03:34.761 SYMLINK libspdk_bdev_null.so 00:03:34.761 SO libspdk_bdev_passthru.so.6.0 00:03:34.761 LIB libspdk_bdev_split.a 00:03:34.761 SO libspdk_bdev_split.so.6.0 00:03:34.761 LIB libspdk_bdev_zone_block.a 00:03:34.761 CC module/bdev/aio/bdev_aio.o 00:03:34.761 SO libspdk_bdev_zone_block.so.6.0 00:03:35.019 SYMLINK libspdk_bdev_passthru.so 00:03:35.019 SYMLINK libspdk_bdev_split.so 00:03:35.019 CC module/bdev/raid/bdev_raid_rpc.o 00:03:35.019 CC module/bdev/raid/bdev_raid_sb.o 00:03:35.019 CC module/bdev/ftl/bdev_ftl.o 00:03:35.019 SYMLINK libspdk_bdev_zone_block.so 00:03:35.019 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:35.019 LIB libspdk_bdev_lvol.a 00:03:35.019 SO libspdk_bdev_lvol.so.6.0 00:03:35.277 CC module/bdev/raid/raid0.o 00:03:35.277 SYMLINK libspdk_bdev_lvol.so 00:03:35.277 CC module/bdev/raid/raid1.o 00:03:35.277 CC 
module/bdev/raid/concat.o 00:03:35.277 CC module/bdev/uring/bdev_uring_rpc.o 00:03:35.277 CC module/bdev/nvme/nvme_rpc.o 00:03:35.277 LIB libspdk_bdev_ftl.a 00:03:35.277 CC module/bdev/aio/bdev_aio_rpc.o 00:03:35.277 SO libspdk_bdev_ftl.so.6.0 00:03:35.277 LIB libspdk_bdev_uring.a 00:03:35.277 SYMLINK libspdk_bdev_ftl.so 00:03:35.277 CC module/bdev/nvme/bdev_mdns_client.o 00:03:35.536 SO libspdk_bdev_uring.so.6.0 00:03:35.536 CC module/bdev/nvme/vbdev_opal.o 00:03:35.536 LIB libspdk_bdev_aio.a 00:03:35.536 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:35.536 SO libspdk_bdev_aio.so.6.0 00:03:35.536 SYMLINK libspdk_bdev_uring.so 00:03:35.536 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:35.536 LIB libspdk_bdev_raid.a 00:03:35.536 SYMLINK libspdk_bdev_aio.so 00:03:35.536 SO libspdk_bdev_raid.so.6.0 00:03:35.536 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:35.536 CC module/bdev/iscsi/bdev_iscsi.o 00:03:35.536 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:35.794 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:35.794 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:35.794 SYMLINK libspdk_bdev_raid.so 00:03:36.052 LIB libspdk_bdev_iscsi.a 00:03:36.052 SO libspdk_bdev_iscsi.so.6.0 00:03:36.311 SYMLINK libspdk_bdev_iscsi.so 00:03:36.311 LIB libspdk_bdev_virtio.a 00:03:36.311 SO libspdk_bdev_virtio.so.6.0 00:03:36.570 SYMLINK libspdk_bdev_virtio.so 00:03:37.137 LIB libspdk_bdev_nvme.a 00:03:37.137 SO libspdk_bdev_nvme.so.7.0 00:03:37.395 SYMLINK libspdk_bdev_nvme.so 00:03:37.654 CC module/event/subsystems/keyring/keyring.o 00:03:37.654 CC module/event/subsystems/sock/sock.o 00:03:37.654 CC module/event/subsystems/scheduler/scheduler.o 00:03:37.654 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:37.654 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:37.654 CC module/event/subsystems/iobuf/iobuf.o 00:03:37.654 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:37.654 CC module/event/subsystems/vmd/vmd.o 00:03:37.912 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:37.912 LIB libspdk_event_keyring.a 00:03:37.912 LIB libspdk_event_sock.a 00:03:37.912 LIB libspdk_event_vhost_blk.a 00:03:37.912 LIB libspdk_event_scheduler.a 00:03:37.912 LIB libspdk_event_iobuf.a 00:03:37.912 SO libspdk_event_sock.so.5.0 00:03:37.912 SO libspdk_event_keyring.so.1.0 00:03:37.912 SO libspdk_event_vhost_blk.so.3.0 00:03:37.912 SO libspdk_event_scheduler.so.4.0 00:03:37.912 SO libspdk_event_iobuf.so.3.0 00:03:37.912 LIB libspdk_event_vmd.a 00:03:37.912 LIB libspdk_event_vfu_tgt.a 00:03:37.912 SYMLINK libspdk_event_sock.so 00:03:37.912 SO libspdk_event_vmd.so.6.0 00:03:37.912 SYMLINK libspdk_event_keyring.so 00:03:37.912 SYMLINK libspdk_event_vhost_blk.so 00:03:37.912 SO libspdk_event_vfu_tgt.so.3.0 00:03:38.170 SYMLINK libspdk_event_scheduler.so 00:03:38.170 SYMLINK libspdk_event_iobuf.so 00:03:38.170 SYMLINK libspdk_event_vfu_tgt.so 00:03:38.170 SYMLINK libspdk_event_vmd.so 00:03:38.170 CC module/event/subsystems/accel/accel.o 00:03:38.438 LIB libspdk_event_accel.a 00:03:38.438 SO libspdk_event_accel.so.6.0 00:03:38.438 SYMLINK libspdk_event_accel.so 00:03:39.005 CC module/event/subsystems/bdev/bdev.o 00:03:39.005 LIB libspdk_event_bdev.a 00:03:39.005 SO libspdk_event_bdev.so.6.0 00:03:39.262 SYMLINK libspdk_event_bdev.so 00:03:39.262 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:39.262 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:39.262 CC module/event/subsystems/nbd/nbd.o 00:03:39.262 CC module/event/subsystems/ublk/ublk.o 00:03:39.262 CC module/event/subsystems/scsi/scsi.o 00:03:39.519 LIB libspdk_event_nbd.a 
00:03:39.519 LIB libspdk_event_ublk.a 00:03:39.519 LIB libspdk_event_scsi.a 00:03:39.519 SO libspdk_event_nbd.so.6.0 00:03:39.519 SO libspdk_event_ublk.so.3.0 00:03:39.519 SO libspdk_event_scsi.so.6.0 00:03:39.519 SYMLINK libspdk_event_ublk.so 00:03:39.777 SYMLINK libspdk_event_nbd.so 00:03:39.777 LIB libspdk_event_nvmf.a 00:03:39.777 SYMLINK libspdk_event_scsi.so 00:03:39.777 SO libspdk_event_nvmf.so.6.0 00:03:39.777 SYMLINK libspdk_event_nvmf.so 00:03:40.034 CC module/event/subsystems/iscsi/iscsi.o 00:03:40.034 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:40.034 LIB libspdk_event_vhost_scsi.a 00:03:40.034 SO libspdk_event_vhost_scsi.so.3.0 00:03:40.034 LIB libspdk_event_iscsi.a 00:03:40.292 SO libspdk_event_iscsi.so.6.0 00:03:40.292 SYMLINK libspdk_event_vhost_scsi.so 00:03:40.292 SYMLINK libspdk_event_iscsi.so 00:03:40.292 SO libspdk.so.6.0 00:03:40.292 SYMLINK libspdk.so 00:03:40.551 CXX app/trace/trace.o 00:03:40.551 CC app/trace_record/trace_record.o 00:03:40.551 TEST_HEADER include/spdk/accel.h 00:03:40.551 TEST_HEADER include/spdk/accel_module.h 00:03:40.551 TEST_HEADER include/spdk/assert.h 00:03:40.551 TEST_HEADER include/spdk/barrier.h 00:03:40.551 TEST_HEADER include/spdk/base64.h 00:03:40.551 TEST_HEADER include/spdk/bdev.h 00:03:40.551 TEST_HEADER include/spdk/bdev_module.h 00:03:40.551 TEST_HEADER include/spdk/bdev_zone.h 00:03:40.551 TEST_HEADER include/spdk/bit_array.h 00:03:40.551 TEST_HEADER include/spdk/bit_pool.h 00:03:40.551 TEST_HEADER include/spdk/blob_bdev.h 00:03:40.551 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:40.810 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:40.810 TEST_HEADER include/spdk/blobfs.h 00:03:40.810 TEST_HEADER include/spdk/blob.h 00:03:40.810 TEST_HEADER include/spdk/conf.h 00:03:40.810 TEST_HEADER include/spdk/config.h 00:03:40.810 TEST_HEADER include/spdk/cpuset.h 00:03:40.810 TEST_HEADER include/spdk/crc16.h 00:03:40.810 TEST_HEADER include/spdk/crc32.h 00:03:40.810 TEST_HEADER include/spdk/crc64.h 00:03:40.810 TEST_HEADER include/spdk/dif.h 00:03:40.810 TEST_HEADER include/spdk/dma.h 00:03:40.810 TEST_HEADER include/spdk/endian.h 00:03:40.810 CC app/nvmf_tgt/nvmf_main.o 00:03:40.810 TEST_HEADER include/spdk/env_dpdk.h 00:03:40.810 TEST_HEADER include/spdk/env.h 00:03:40.810 TEST_HEADER include/spdk/event.h 00:03:40.810 TEST_HEADER include/spdk/fd_group.h 00:03:40.810 TEST_HEADER include/spdk/fd.h 00:03:40.810 TEST_HEADER include/spdk/file.h 00:03:40.810 TEST_HEADER include/spdk/ftl.h 00:03:40.810 TEST_HEADER include/spdk/gpt_spec.h 00:03:40.810 TEST_HEADER include/spdk/hexlify.h 00:03:40.810 TEST_HEADER include/spdk/histogram_data.h 00:03:40.810 TEST_HEADER include/spdk/idxd.h 00:03:40.810 TEST_HEADER include/spdk/idxd_spec.h 00:03:40.810 TEST_HEADER include/spdk/init.h 00:03:40.810 CC examples/util/zipf/zipf.o 00:03:40.810 TEST_HEADER include/spdk/ioat.h 00:03:40.810 TEST_HEADER include/spdk/ioat_spec.h 00:03:40.810 TEST_HEADER include/spdk/iscsi_spec.h 00:03:40.810 TEST_HEADER include/spdk/json.h 00:03:40.810 CC test/thread/poller_perf/poller_perf.o 00:03:40.810 TEST_HEADER include/spdk/jsonrpc.h 00:03:40.810 TEST_HEADER include/spdk/keyring.h 00:03:40.810 TEST_HEADER include/spdk/keyring_module.h 00:03:40.810 TEST_HEADER include/spdk/likely.h 00:03:40.810 TEST_HEADER include/spdk/log.h 00:03:40.810 TEST_HEADER include/spdk/lvol.h 00:03:40.810 CC examples/ioat/perf/perf.o 00:03:40.810 TEST_HEADER include/spdk/memory.h 00:03:40.810 TEST_HEADER include/spdk/mmio.h 00:03:40.810 TEST_HEADER include/spdk/nbd.h 00:03:40.810 
TEST_HEADER include/spdk/notify.h 00:03:40.810 TEST_HEADER include/spdk/nvme.h 00:03:40.810 CC test/app/bdev_svc/bdev_svc.o 00:03:40.810 TEST_HEADER include/spdk/nvme_intel.h 00:03:40.810 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:40.810 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:40.810 TEST_HEADER include/spdk/nvme_spec.h 00:03:40.810 TEST_HEADER include/spdk/nvme_zns.h 00:03:40.810 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:40.810 CC test/dma/test_dma/test_dma.o 00:03:40.810 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:40.810 TEST_HEADER include/spdk/nvmf.h 00:03:40.810 TEST_HEADER include/spdk/nvmf_spec.h 00:03:40.810 TEST_HEADER include/spdk/nvmf_transport.h 00:03:40.810 TEST_HEADER include/spdk/opal.h 00:03:40.810 TEST_HEADER include/spdk/opal_spec.h 00:03:40.810 TEST_HEADER include/spdk/pci_ids.h 00:03:40.810 TEST_HEADER include/spdk/pipe.h 00:03:40.810 TEST_HEADER include/spdk/queue.h 00:03:40.810 TEST_HEADER include/spdk/reduce.h 00:03:40.810 TEST_HEADER include/spdk/rpc.h 00:03:40.810 TEST_HEADER include/spdk/scheduler.h 00:03:40.810 TEST_HEADER include/spdk/scsi.h 00:03:40.810 TEST_HEADER include/spdk/scsi_spec.h 00:03:40.810 TEST_HEADER include/spdk/sock.h 00:03:40.810 TEST_HEADER include/spdk/stdinc.h 00:03:40.810 TEST_HEADER include/spdk/string.h 00:03:40.810 TEST_HEADER include/spdk/thread.h 00:03:40.810 TEST_HEADER include/spdk/trace.h 00:03:40.810 TEST_HEADER include/spdk/trace_parser.h 00:03:40.810 TEST_HEADER include/spdk/tree.h 00:03:40.810 TEST_HEADER include/spdk/ublk.h 00:03:40.810 TEST_HEADER include/spdk/util.h 00:03:40.810 TEST_HEADER include/spdk/uuid.h 00:03:40.810 TEST_HEADER include/spdk/version.h 00:03:40.810 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:40.810 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:40.810 TEST_HEADER include/spdk/vhost.h 00:03:40.810 TEST_HEADER include/spdk/vmd.h 00:03:40.810 TEST_HEADER include/spdk/xor.h 00:03:40.810 TEST_HEADER include/spdk/zipf.h 00:03:40.810 CXX test/cpp_headers/accel.o 00:03:40.810 LINK interrupt_tgt 00:03:41.099 LINK zipf 00:03:41.099 LINK nvmf_tgt 00:03:41.099 LINK poller_perf 00:03:41.099 LINK spdk_trace_record 00:03:41.099 LINK bdev_svc 00:03:41.099 CXX test/cpp_headers/accel_module.o 00:03:41.099 LINK ioat_perf 00:03:41.099 LINK spdk_trace 00:03:41.383 LINK test_dma 00:03:41.383 CC app/iscsi_tgt/iscsi_tgt.o 00:03:41.383 CXX test/cpp_headers/assert.o 00:03:41.383 CC examples/sock/hello_world/hello_sock.o 00:03:41.383 CC examples/vmd/lsvmd/lsvmd.o 00:03:41.383 CC examples/ioat/verify/verify.o 00:03:41.383 CC examples/idxd/perf/perf.o 00:03:41.383 CC examples/thread/thread/thread_ex.o 00:03:41.383 CC examples/vmd/led/led.o 00:03:41.641 CXX test/cpp_headers/barrier.o 00:03:41.641 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:41.641 LINK iscsi_tgt 00:03:41.641 LINK lsvmd 00:03:41.641 LINK led 00:03:41.641 LINK hello_sock 00:03:41.641 CXX test/cpp_headers/base64.o 00:03:41.641 CC app/spdk_tgt/spdk_tgt.o 00:03:41.641 LINK verify 00:03:41.898 LINK thread 00:03:41.898 CC test/app/histogram_perf/histogram_perf.o 00:03:41.898 LINK idxd_perf 00:03:41.898 CXX test/cpp_headers/bdev.o 00:03:41.898 CC test/app/jsoncat/jsoncat.o 00:03:41.898 CC test/app/stub/stub.o 00:03:41.898 LINK spdk_tgt 00:03:42.155 LINK histogram_perf 00:03:42.155 LINK nvme_fuzz 00:03:42.155 CC test/event/event_perf/event_perf.o 00:03:42.155 LINK jsoncat 00:03:42.155 CC test/event/reactor/reactor.o 00:03:42.155 CXX test/cpp_headers/bdev_module.o 00:03:42.155 CC test/env/mem_callbacks/mem_callbacks.o 00:03:42.155 LINK stub 00:03:42.155 CC 
examples/nvme/hello_world/hello_world.o 00:03:42.155 LINK event_perf 00:03:42.412 LINK reactor 00:03:42.412 CC app/spdk_lspci/spdk_lspci.o 00:03:42.412 CC test/event/reactor_perf/reactor_perf.o 00:03:42.412 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:42.412 CXX test/cpp_headers/bdev_zone.o 00:03:42.412 CC test/event/app_repeat/app_repeat.o 00:03:42.412 CXX test/cpp_headers/bit_array.o 00:03:42.412 LINK spdk_lspci 00:03:42.412 LINK reactor_perf 00:03:42.669 LINK hello_world 00:03:42.669 CC test/event/scheduler/scheduler.o 00:03:42.669 LINK app_repeat 00:03:42.669 CC test/nvme/aer/aer.o 00:03:42.669 CXX test/cpp_headers/bit_pool.o 00:03:42.669 CC test/nvme/reset/reset.o 00:03:42.669 CC app/spdk_nvme_perf/perf.o 00:03:42.669 CC test/env/vtophys/vtophys.o 00:03:42.669 LINK mem_callbacks 00:03:42.926 CXX test/cpp_headers/blob_bdev.o 00:03:42.926 LINK scheduler 00:03:42.926 CC examples/nvme/reconnect/reconnect.o 00:03:42.926 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:42.926 CXX test/cpp_headers/blobfs_bdev.o 00:03:42.926 LINK vtophys 00:03:42.926 LINK reset 00:03:42.926 LINK aer 00:03:43.184 LINK env_dpdk_post_init 00:03:43.184 CXX test/cpp_headers/blobfs.o 00:03:43.184 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:43.184 CXX test/cpp_headers/blob.o 00:03:43.184 CC examples/accel/perf/accel_perf.o 00:03:43.184 CC test/rpc_client/rpc_client_test.o 00:03:43.184 LINK reconnect 00:03:43.441 CC test/nvme/sgl/sgl.o 00:03:43.441 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:43.441 CC test/env/memory/memory_ut.o 00:03:43.441 CXX test/cpp_headers/conf.o 00:03:43.441 CC test/env/pci/pci_ut.o 00:03:43.441 LINK rpc_client_test 00:03:43.441 CXX test/cpp_headers/config.o 00:03:43.441 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:43.699 CXX test/cpp_headers/cpuset.o 00:03:43.699 CXX test/cpp_headers/crc16.o 00:03:43.699 LINK sgl 00:03:43.699 CXX test/cpp_headers/crc32.o 00:03:43.699 LINK accel_perf 00:03:43.956 LINK vhost_fuzz 00:03:43.956 CC app/spdk_nvme_identify/identify.o 00:03:43.956 LINK pci_ut 00:03:43.956 LINK spdk_nvme_perf 00:03:43.956 CC test/nvme/e2edp/nvme_dp.o 00:03:43.956 CXX test/cpp_headers/crc64.o 00:03:43.956 CXX test/cpp_headers/dif.o 00:03:43.956 CXX test/cpp_headers/dma.o 00:03:44.214 CXX test/cpp_headers/endian.o 00:03:44.214 LINK nvme_manage 00:03:44.214 CC test/nvme/err_injection/err_injection.o 00:03:44.214 CC test/nvme/overhead/overhead.o 00:03:44.214 LINK nvme_dp 00:03:44.472 CXX test/cpp_headers/env_dpdk.o 00:03:44.472 CC test/blobfs/mkfs/mkfs.o 00:03:44.472 CC test/accel/dif/dif.o 00:03:44.472 LINK err_injection 00:03:44.472 CXX test/cpp_headers/env.o 00:03:44.472 CC examples/nvme/arbitration/arbitration.o 00:03:44.472 LINK iscsi_fuzz 00:03:44.472 LINK overhead 00:03:44.730 LINK memory_ut 00:03:44.730 CXX test/cpp_headers/event.o 00:03:44.730 LINK mkfs 00:03:44.730 CC test/nvme/startup/startup.o 00:03:44.730 CXX test/cpp_headers/fd_group.o 00:03:44.988 CC test/lvol/esnap/esnap.o 00:03:44.988 LINK arbitration 00:03:44.988 LINK spdk_nvme_identify 00:03:44.988 LINK startup 00:03:44.988 CXX test/cpp_headers/fd.o 00:03:44.988 CC app/spdk_nvme_discover/discovery_aer.o 00:03:44.988 LINK dif 00:03:44.988 CC app/spdk_top/spdk_top.o 00:03:45.246 CC examples/blob/hello_world/hello_blob.o 00:03:45.246 CC examples/bdev/hello_world/hello_bdev.o 00:03:45.246 CXX test/cpp_headers/file.o 00:03:45.246 LINK spdk_nvme_discover 00:03:45.246 CXX test/cpp_headers/ftl.o 00:03:45.246 CC test/nvme/reserve/reserve.o 00:03:45.246 CC examples/nvme/hotplug/hotplug.o 00:03:45.531 CC 
app/vhost/vhost.o 00:03:45.531 LINK hello_blob 00:03:45.531 LINK hello_bdev 00:03:45.531 CXX test/cpp_headers/gpt_spec.o 00:03:45.531 CC examples/blob/cli/blobcli.o 00:03:45.531 LINK reserve 00:03:45.531 LINK hotplug 00:03:45.531 LINK vhost 00:03:45.531 CXX test/cpp_headers/hexlify.o 00:03:45.789 CC test/bdev/bdevio/bdevio.o 00:03:45.789 CC test/nvme/simple_copy/simple_copy.o 00:03:45.789 CXX test/cpp_headers/histogram_data.o 00:03:45.789 CC examples/bdev/bdevperf/bdevperf.o 00:03:45.789 CXX test/cpp_headers/idxd.o 00:03:45.789 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:45.789 CC examples/nvme/abort/abort.o 00:03:46.047 CXX test/cpp_headers/idxd_spec.o 00:03:46.047 LINK cmb_copy 00:03:46.047 LINK simple_copy 00:03:46.047 LINK blobcli 00:03:46.047 CC test/nvme/connect_stress/connect_stress.o 00:03:46.047 LINK bdevio 00:03:46.047 LINK spdk_top 00:03:46.305 CXX test/cpp_headers/init.o 00:03:46.306 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:46.306 LINK abort 00:03:46.306 CC test/nvme/boot_partition/boot_partition.o 00:03:46.306 LINK connect_stress 00:03:46.306 CXX test/cpp_headers/ioat.o 00:03:46.306 CC test/nvme/compliance/nvme_compliance.o 00:03:46.563 CC app/spdk_dd/spdk_dd.o 00:03:46.563 LINK boot_partition 00:03:46.563 LINK pmr_persistence 00:03:46.563 CC app/fio/nvme/fio_plugin.o 00:03:46.563 CXX test/cpp_headers/ioat_spec.o 00:03:46.563 CC test/nvme/fused_ordering/fused_ordering.o 00:03:46.563 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:46.821 CXX test/cpp_headers/iscsi_spec.o 00:03:46.821 LINK bdevperf 00:03:46.821 CC test/nvme/fdp/fdp.o 00:03:46.821 LINK nvme_compliance 00:03:46.821 LINK doorbell_aers 00:03:46.821 CC app/fio/bdev/fio_plugin.o 00:03:46.821 LINK fused_ordering 00:03:46.821 CXX test/cpp_headers/json.o 00:03:47.078 CXX test/cpp_headers/jsonrpc.o 00:03:47.078 CXX test/cpp_headers/keyring.o 00:03:47.078 CXX test/cpp_headers/keyring_module.o 00:03:47.078 CC test/nvme/cuse/cuse.o 00:03:47.078 LINK spdk_dd 00:03:47.336 CXX test/cpp_headers/likely.o 00:03:47.336 LINK fdp 00:03:47.336 CXX test/cpp_headers/log.o 00:03:47.336 CC examples/nvmf/nvmf/nvmf.o 00:03:47.336 LINK spdk_nvme 00:03:47.336 CXX test/cpp_headers/lvol.o 00:03:47.336 CXX test/cpp_headers/memory.o 00:03:47.336 CXX test/cpp_headers/mmio.o 00:03:47.336 CXX test/cpp_headers/nbd.o 00:03:47.336 CXX test/cpp_headers/notify.o 00:03:47.336 CXX test/cpp_headers/nvme.o 00:03:47.594 CXX test/cpp_headers/nvme_intel.o 00:03:47.594 LINK spdk_bdev 00:03:47.594 CXX test/cpp_headers/nvme_ocssd.o 00:03:47.594 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:47.594 CXX test/cpp_headers/nvme_spec.o 00:03:47.594 CXX test/cpp_headers/nvme_zns.o 00:03:47.594 CXX test/cpp_headers/nvmf_cmd.o 00:03:47.594 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:47.594 CXX test/cpp_headers/nvmf.o 00:03:47.594 LINK nvmf 00:03:47.594 CXX test/cpp_headers/nvmf_spec.o 00:03:47.851 CXX test/cpp_headers/nvmf_transport.o 00:03:47.851 CXX test/cpp_headers/opal.o 00:03:47.851 CXX test/cpp_headers/opal_spec.o 00:03:47.851 CXX test/cpp_headers/pci_ids.o 00:03:47.851 CXX test/cpp_headers/pipe.o 00:03:47.851 CXX test/cpp_headers/queue.o 00:03:47.851 CXX test/cpp_headers/reduce.o 00:03:47.851 CXX test/cpp_headers/rpc.o 00:03:47.851 CXX test/cpp_headers/scheduler.o 00:03:47.851 CXX test/cpp_headers/scsi.o 00:03:47.851 CXX test/cpp_headers/scsi_spec.o 00:03:47.851 CXX test/cpp_headers/sock.o 00:03:48.109 CXX test/cpp_headers/stdinc.o 00:03:48.109 CXX test/cpp_headers/string.o 00:03:48.109 CXX test/cpp_headers/thread.o 00:03:48.109 CXX 
test/cpp_headers/trace.o 00:03:48.109 CXX test/cpp_headers/trace_parser.o 00:03:48.109 CXX test/cpp_headers/tree.o 00:03:48.109 CXX test/cpp_headers/ublk.o 00:03:48.109 CXX test/cpp_headers/util.o 00:03:48.109 CXX test/cpp_headers/uuid.o 00:03:48.109 CXX test/cpp_headers/version.o 00:03:48.109 CXX test/cpp_headers/vfio_user_pci.o 00:03:48.109 CXX test/cpp_headers/vfio_user_spec.o 00:03:48.367 CXX test/cpp_headers/vhost.o 00:03:48.367 CXX test/cpp_headers/vmd.o 00:03:48.367 CXX test/cpp_headers/xor.o 00:03:48.367 CXX test/cpp_headers/zipf.o 00:03:48.626 LINK cuse 00:03:51.913 LINK esnap 00:03:52.171 00:03:52.171 real 1m16.268s 00:03:52.171 user 7m29.559s 00:03:52.171 sys 1m32.258s 00:03:52.171 ************************************ 00:03:52.171 END TEST make 00:03:52.171 ************************************ 00:03:52.171 21:03:03 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:52.171 21:03:03 make -- common/autotest_common.sh@10 -- $ set +x 00:03:52.171 21:03:03 -- common/autotest_common.sh@1142 -- $ return 0 00:03:52.171 21:03:03 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:52.171 21:03:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:52.171 21:03:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:52.171 21:03:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.171 21:03:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:52.171 21:03:03 -- pm/common@44 -- $ pid=5202 00:03:52.171 21:03:03 -- pm/common@50 -- $ kill -TERM 5202 00:03:52.171 21:03:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.171 21:03:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:52.171 21:03:03 -- pm/common@44 -- $ pid=5204 00:03:52.171 21:03:03 -- pm/common@50 -- $ kill -TERM 5204 00:03:52.171 21:03:03 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:52.171 21:03:03 -- nvmf/common.sh@7 -- # uname -s 00:03:52.171 21:03:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:52.171 21:03:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:52.171 21:03:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:52.171 21:03:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:52.171 21:03:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:52.171 21:03:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:52.171 21:03:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:52.171 21:03:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:52.171 21:03:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:52.171 21:03:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:52.171 21:03:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:03:52.171 21:03:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:03:52.171 21:03:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:52.171 21:03:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:52.171 21:03:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:52.171 21:03:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:52.171 21:03:03 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:52.171 21:03:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:52.171 21:03:03 -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:52.171 21:03:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:52.171 21:03:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.171 21:03:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.171 21:03:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.171 21:03:03 -- paths/export.sh@5 -- # export PATH 00:03:52.171 21:03:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.171 21:03:03 -- nvmf/common.sh@47 -- # : 0 00:03:52.171 21:03:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:52.171 21:03:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:52.171 21:03:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:52.171 21:03:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:52.171 21:03:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:52.171 21:03:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:52.171 21:03:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:52.171 21:03:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:52.171 21:03:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:52.171 21:03:03 -- spdk/autotest.sh@32 -- # uname -s 00:03:52.171 21:03:03 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:52.171 21:03:03 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:52.171 21:03:03 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:52.171 21:03:03 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:52.171 21:03:03 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:52.171 21:03:03 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:52.171 21:03:03 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:52.171 21:03:03 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:52.171 21:03:03 -- spdk/autotest.sh@48 -- # udevadm_pid=53503 00:03:52.171 21:03:03 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:52.171 21:03:03 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:52.171 21:03:03 -- pm/common@17 -- # local monitor 00:03:52.171 21:03:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.171 21:03:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.171 21:03:03 -- pm/common@25 -- # sleep 1 00:03:52.171 21:03:03 -- pm/common@21 -- # date +%s 00:03:52.171 21:03:03 -- pm/common@21 -- # 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720990983 00:03:52.171 21:03:03 -- pm/common@21 -- # date +%s 00:03:52.171 21:03:03 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720990983 00:03:52.429 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720990983_collect-vmstat.pm.log 00:03:52.429 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720990983_collect-cpu-load.pm.log 00:03:53.388 21:03:04 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:53.388 21:03:04 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:53.388 21:03:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.388 21:03:04 -- common/autotest_common.sh@10 -- # set +x 00:03:53.388 21:03:04 -- spdk/autotest.sh@59 -- # create_test_list 00:03:53.388 21:03:04 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:53.388 21:03:04 -- common/autotest_common.sh@10 -- # set +x 00:03:53.388 21:03:04 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:53.388 21:03:04 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:53.388 21:03:04 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:53.388 21:03:04 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:53.388 21:03:04 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:53.388 21:03:04 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:53.388 21:03:04 -- common/autotest_common.sh@1455 -- # uname 00:03:53.388 21:03:04 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:53.388 21:03:04 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:53.388 21:03:04 -- common/autotest_common.sh@1475 -- # uname 00:03:53.388 21:03:04 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:53.388 21:03:04 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:53.388 21:03:04 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:53.388 21:03:04 -- spdk/autotest.sh@72 -- # hash lcov 00:03:53.388 21:03:04 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:53.388 21:03:04 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:53.388 --rc lcov_branch_coverage=1 00:03:53.388 --rc lcov_function_coverage=1 00:03:53.388 --rc genhtml_branch_coverage=1 00:03:53.388 --rc genhtml_function_coverage=1 00:03:53.388 --rc genhtml_legend=1 00:03:53.388 --rc geninfo_all_blocks=1 00:03:53.388 ' 00:03:53.388 21:03:04 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:53.388 --rc lcov_branch_coverage=1 00:03:53.388 --rc lcov_function_coverage=1 00:03:53.388 --rc genhtml_branch_coverage=1 00:03:53.388 --rc genhtml_function_coverage=1 00:03:53.388 --rc genhtml_legend=1 00:03:53.388 --rc geninfo_all_blocks=1 00:03:53.388 ' 00:03:53.388 21:03:04 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:53.388 --rc lcov_branch_coverage=1 00:03:53.388 --rc lcov_function_coverage=1 00:03:53.388 --rc genhtml_branch_coverage=1 00:03:53.388 --rc genhtml_function_coverage=1 00:03:53.388 --rc genhtml_legend=1 00:03:53.388 --rc geninfo_all_blocks=1 00:03:53.388 --no-external' 00:03:53.388 21:03:04 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:53.388 --rc lcov_branch_coverage=1 00:03:53.388 --rc lcov_function_coverage=1 00:03:53.388 --rc genhtml_branch_coverage=1 
00:03:53.388 --rc genhtml_function_coverage=1 00:03:53.388 --rc genhtml_legend=1 00:03:53.388 --rc geninfo_all_blocks=1 00:03:53.388 --no-external' 00:03:53.388 21:03:04 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:53.388 lcov: LCOV version 1.14 00:03:53.388 21:03:04 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:08.263 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:08.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:20.467 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:20.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:20.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:20.468 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 
00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:20.468 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:20.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:20.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:23.795 21:03:34 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:23.795 21:03:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:23.795 21:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:23.795 21:03:34 -- spdk/autotest.sh@91 -- # rm -f 00:04:23.795 21:03:34 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.795 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.795 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:23.795 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:23.795 21:03:35 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:23.795 21:03:35 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:23.795 21:03:35 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:23.795 21:03:35 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:23.795 21:03:35 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:23.795 21:03:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:23.795 21:03:35 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:23.795 21:03:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.795 21:03:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:23.795 21:03:35 -- common/autotest_common.sh@1672 -- # for 
nvme in /sys/block/nvme* 00:04:23.795 21:03:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:23.795 21:03:35 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:23.795 21:03:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:23.795 21:03:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:23.795 21:03:35 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:23.795 21:03:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:23.795 21:03:35 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:23.795 21:03:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:23.795 21:03:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:23.795 21:03:35 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:23.795 21:03:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:23.795 21:03:35 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:23.795 21:03:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:23.795 21:03:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:23.795 21:03:35 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:23.795 21:03:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.795 21:03:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:23.795 21:03:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:23.795 21:03:35 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:23.795 21:03:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:24.054 No valid GPT data, bailing 00:04:24.054 21:03:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:24.054 21:03:35 -- scripts/common.sh@391 -- # pt= 00:04:24.054 21:03:35 -- scripts/common.sh@392 -- # return 1 00:04:24.054 21:03:35 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:24.054 1+0 records in 00:04:24.054 1+0 records out 00:04:24.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430392 s, 244 MB/s 00:04:24.054 21:03:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:24.054 21:03:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:24.054 21:03:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:24.054 21:03:35 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:24.054 21:03:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:24.054 No valid GPT data, bailing 00:04:24.054 21:03:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:24.054 21:03:35 -- scripts/common.sh@391 -- # pt= 00:04:24.054 21:03:35 -- scripts/common.sh@392 -- # return 1 00:04:24.054 21:03:35 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:24.054 1+0 records in 00:04:24.054 1+0 records out 00:04:24.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476798 s, 220 MB/s 00:04:24.054 21:03:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:24.054 21:03:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:24.054 21:03:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:24.054 21:03:35 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:24.054 21:03:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:24.054 No valid GPT data, bailing 
00:04:24.054 21:03:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:24.054 21:03:35 -- scripts/common.sh@391 -- # pt= 00:04:24.054 21:03:35 -- scripts/common.sh@392 -- # return 1 00:04:24.054 21:03:35 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:24.054 1+0 records in 00:04:24.054 1+0 records out 00:04:24.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0033292 s, 315 MB/s 00:04:24.054 21:03:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:24.054 21:03:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:24.054 21:03:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:24.054 21:03:35 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:24.054 21:03:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:24.313 No valid GPT data, bailing 00:04:24.313 21:03:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:24.313 21:03:35 -- scripts/common.sh@391 -- # pt= 00:04:24.313 21:03:35 -- scripts/common.sh@392 -- # return 1 00:04:24.313 21:03:35 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:24.313 1+0 records in 00:04:24.313 1+0 records out 00:04:24.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494416 s, 212 MB/s 00:04:24.313 21:03:35 -- spdk/autotest.sh@118 -- # sync 00:04:24.313 21:03:35 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:24.313 21:03:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:24.313 21:03:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:26.219 21:03:37 -- spdk/autotest.sh@124 -- # uname -s 00:04:26.219 21:03:37 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:26.219 21:03:37 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:26.219 21:03:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.219 21:03:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.219 21:03:37 -- common/autotest_common.sh@10 -- # set +x 00:04:26.219 ************************************ 00:04:26.219 START TEST setup.sh 00:04:26.219 ************************************ 00:04:26.219 21:03:37 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:26.219 * Looking for test storage... 00:04:26.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:26.219 21:03:37 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:26.219 21:03:37 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:26.219 21:03:37 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:26.219 21:03:37 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.219 21:03:37 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.219 21:03:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:26.219 ************************************ 00:04:26.219 START TEST acl 00:04:26.219 ************************************ 00:04:26.219 21:03:37 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:26.478 * Looking for test storage... 
00:04:26.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:26.478 21:03:37 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:26.478 21:03:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:26.478 21:03:37 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:26.478 21:03:37 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:26.478 21:03:37 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:26.478 21:03:37 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:26.478 21:03:37 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:26.478 21:03:37 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.478 21:03:37 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.047 21:03:38 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:27.047 21:03:38 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:27.047 21:03:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.047 21:03:38 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:27.047 21:03:38 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.047 21:03:38 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:27.615 21:03:39 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:27.615 21:03:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:27.615 21:03:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.615 Hugepages 00:04:27.615 node hugesize free / total 00:04:27.615 21:03:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:27.615 21:03:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:27.615 21:03:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.615 00:04:27.615 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:27.615 21:03:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:27.615 21:03:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:27.615 21:03:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.874 21:03:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:27.875 21:03:39 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:27.875 21:03:39 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.875 21:03:39 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.875 21:03:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:27.875 ************************************ 00:04:27.875 START TEST denied 00:04:27.875 ************************************ 00:04:27.875 21:03:39 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:27.875 21:03:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:27.875 21:03:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:27.875 21:03:39 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:27.875 21:03:39 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.875 21:03:39 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:28.810 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:28.810 21:03:40 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:28.810 21:03:40 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:28.810 21:03:40 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:28.810 21:03:40 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:28.810 21:03:40 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:28.810 21:03:40 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:28.810 21:03:40 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:28.810 21:03:40 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:28.810 21:03:40 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:28.810 21:03:40 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:29.376 00:04:29.376 real 0m1.352s 00:04:29.376 user 0m0.560s 00:04:29.376 sys 0m0.757s 00:04:29.376 21:03:40 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.376 21:03:40 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:29.376 ************************************ 00:04:29.376 END TEST denied 00:04:29.376 ************************************ 00:04:29.376 21:03:40 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:29.376 21:03:40 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:29.376 21:03:40 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.376 21:03:40 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.376 21:03:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:29.376 ************************************ 00:04:29.376 START TEST allowed 00:04:29.376 ************************************ 00:04:29.376 21:03:40 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:29.376 21:03:40 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:29.376 21:03:40 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:29.376 21:03:40 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:29.376 21:03:40 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.376 21:03:40 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:30.311 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.311 21:03:41 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:30.311 21:03:41 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:30.311 21:03:41 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:30.311 21:03:41 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:30.311 21:03:41 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:30.311 21:03:41 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:30.311 21:03:41 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:30.311 21:03:41 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:30.311 21:03:41 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.311 21:03:41 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:30.878 00:04:30.878 real 0m1.499s 00:04:30.878 user 0m0.661s 00:04:30.878 sys 0m0.820s 00:04:30.878 21:03:42 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:30.878 21:03:42 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:30.878 ************************************ 00:04:30.878 END TEST allowed 00:04:30.878 ************************************ 00:04:30.878 21:03:42 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:30.878 00:04:30.878 real 0m4.626s 00:04:30.878 user 0m2.043s 00:04:30.878 sys 0m2.536s 00:04:30.878 21:03:42 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.878 21:03:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:30.878 ************************************ 00:04:30.878 END TEST acl 00:04:30.878 ************************************ 00:04:30.878 21:03:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:30.878 21:03:42 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:30.878 21:03:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.878 21:03:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.878 21:03:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:30.878 ************************************ 00:04:30.878 START TEST hugepages 00:04:30.878 ************************************ 00:04:30.878 21:03:42 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:31.138 * Looking for test storage... 00:04:31.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.138 21:03:42 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5801496 kB' 'MemAvailable: 7393060 kB' 'Buffers: 2436 kB' 'Cached: 1805244 kB' 'SwapCached: 0 kB' 'Active: 436040 kB' 'Inactive: 1477108 kB' 'Active(anon): 115956 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 107112 kB' 'Mapped: 48780 kB' 'Shmem: 10488 kB' 'KReclaimable: 62620 kB' 'Slab: 134480 kB' 'SReclaimable: 62620 kB' 'SUnreclaim: 71860 kB' 'KernelStack: 6596 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 339184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.139 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.140 21:03:42 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:31.140 21:03:42 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:31.140 21:03:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.140 21:03:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.140 21:03:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:31.140 ************************************ 00:04:31.140 START TEST default_setup 00:04:31.140 ************************************ 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.140 21:03:42 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:31.707 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.969 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:31.969 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7844948 kB' 'MemAvailable: 9436368 kB' 'Buffers: 2436 kB' 'Cached: 1805232 kB' 'SwapCached: 0 kB' 'Active: 452480 kB' 'Inactive: 1477108 kB' 'Active(anon): 132396 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123488 kB' 'Mapped: 48908 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134256 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71920 kB' 'KernelStack: 6576 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.969 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7844948 kB' 'MemAvailable: 9436368 kB' 'Buffers: 2436 kB' 'Cached: 1805232 kB' 'SwapCached: 0 kB' 'Active: 452564 kB' 'Inactive: 1477108 kB' 'Active(anon): 132480 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123560 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134264 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71928 kB' 'KernelStack: 6576 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.970 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
[xtrace condensed: the setup/common.sh@31-@32 read loop walks the remaining /proc/meminfo keys (Bounce through Unaccepted) and continues past each one because none matches HugePages_Surp]
00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=':
' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7844948 kB' 'MemAvailable: 9436368 kB' 'Buffers: 2436 kB' 'Cached: 1805232 kB' 'SwapCached: 0 kB' 'Active: 452524 kB' 'Inactive: 1477108 kB' 'Active(anon): 132440 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123520 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134252 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71916 kB' 'KernelStack: 6560 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.971 21:03:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': '
[xtrace condensed: the same read loop walks /proc/meminfo keys Inactive through HugePages_Free and continues past each one because none matches HugePages_Rsvd]
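The loop condensed above is the get_meminfo helper from setup/common.sh: it snapshots a meminfo file into an array, strips any leading "Node N " column, then re-reads the snapshot with IFS=': ' until the requested key matches and its value is echoed. A minimal sketch reconstructed from the traced commands follows; it is an approximation rather than the repository's exact code, and the extglob toggle plus the if-wrapper around the node test are assumptions.

    shopt -s extglob                        # the +([0-9]) patterns seen in the trace need extglob
    get_meminfo() {                         # usage: get_meminfo <key> [numa-node]
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo                                         # common.sh@22
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo        # common.sh@23-@25
        fi
        mapfile -t mem < "$mem_f"                                   # common.sh@28
        mem=("${mem[@]#Node +([0-9]) }")                            # common.sh@29: drop "Node N "
        while IFS=': ' read -r var val _; do                        # common.sh@31
            [[ $var == "$get" ]] || continue                        # common.sh@32: skip other keys
            echo "$val" && return 0                                 # common.sh@33: print the value
        done < <(printf '%s\n' "${mem[@]}")                         # common.sh@16
        return 1
    }

On the snapshot printed above, get_meminfo HugePages_Rsvd would echo 0, which is exactly what the next trace entries record.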
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.972 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.972 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.972 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:31.972 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:31.972 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:31.972 nr_hugepages=1024 00:04:31.972 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:31.972 resv_hugepages=0 00:04:31.972 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:31.972 surplus_hugepages=0 00:04:31.972 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:31.972 anon_hugepages=0 00:04:31.972 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7844948 kB' 'MemAvailable: 9436368 kB' 'Buffers: 2436 kB' 'Cached: 1805232 kB' 'SwapCached: 0 kB' 'Active: 452628 kB' 'Inactive: 1477108 kB' 'Active(anon): 132544 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123648 kB' 'Mapped: 48792 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134240 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71904 kB' 'KernelStack: 6560 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.973 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.973 21:03:43 
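The bookkeeping traced above (setup/hugepages.sh@99-@109, with the comparison repeated at @110 just below) reduces to one identity: the kernel must report exactly the hugepage count the test configured once surplus and reserved pages are taken into account. A hedged paraphrase, reusing the get_meminfo sketch above; the literal 1024 is the nr_hugepages value echoed in this run:

    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    nr_hugepages=1024                       # what default_setup requested
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    # hugepages.sh@107/@110: HugePages_Total must account for requested + surplus + reserved pages
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))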
[xtrace condensed: the read loop walks /proc/meminfo keys Active(anon) through Unaccepted and continues past each one because none matches HugePages_Total]
00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
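After the pool-wide totals check out, the trace switches to a per-node pass: get_nodes enumerates the NUMA nodes from sysfs, and get_meminfo is re-run with a node argument so it reads /sys/devices/system/node/nodeN/meminfo instead of /proc/meminfo, with the "Node N " column stripped at common.sh@29. A rough sketch of that pass, with array names taken from the trace but the exact bookkeeping simplified:

    shopt -s extglob                              # for the +([0-9]) globs used by the trace
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024            # pages expected per node; a single node here
    done
    no_nodes=${#nodes_sys[@]}                     # no_nodes=1 in the entries that follow
    for node in "${!nodes_sys[@]}"; do
        mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")          # drop the leading "Node N " column
        printf '%s\n' "${mem[@]}" | grep -w HugePages_Surp   # the key the loop below scans for
    done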
setup/hugepages.sh@32 -- # no_nodes=1 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7846564 kB' 'MemUsed: 4395408 kB' 'SwapCached: 0 kB' 'Active: 452568 kB' 'Inactive: 1477108 kB' 'Active(anon): 132484 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1807668 kB' 'Mapped: 48792 kB' 'AnonPages: 123624 kB' 'Shmem: 10464 kB' 'KernelStack: 6576 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62336 kB' 'Slab: 134240 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.974 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.975 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.975 21:03:43 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue
[same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue xtrace for every remaining node0 field, SwapCached through HugePages_Free, none matching HugePages_Surp]
00:04:31.976 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.976 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:31.976 21:03:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
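For readers following the trace: the lookup that just returned 0 is easier to see as one compact function than as per-field xtrace. The sketch below is a condensed reading of the traced setup/common.sh steps (choose the per-node meminfo file when a node id is given, drop the "Node <id> " prefix, split each row on ': ', print the value of the requested field). The name get_meminfo_sketch and the exact control flow are illustrative, not the project's verbatim helper.

#!/usr/bin/env bash
# Condensed, illustrative version of the meminfo lookup traced above.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node id, read the per-node statistics from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node "$node" }             # per-node rows carry a "Node <id> " prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then          # e.g. HugePages_Surp
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}
# The call in the trace, surplus huge pages on node 0:
#   get_meminfo_sketch HugePages_Surp 0      # prints 0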
00:04:31.976 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.976 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.976 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.976 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.976 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:31.976 node0=1024 expecting 1024 00:04:31.976 21:03:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:31.976 00:04:31.976 real 0m0.949s 00:04:31.976 user 0m0.471s 00:04:31.976 sys 0m0.422s 00:04:31.976 21:03:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.976 21:03:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:31.976 ************************************ 00:04:31.976 END TEST default_setup 00:04:31.976 ************************************ 00:04:32.234 21:03:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:32.234 21:03:43 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:32.234 21:03:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.234 21:03:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.234 21:03:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 ************************************ 00:04:32.234 START TEST per_node_1G_alloc 00:04:32.234 ************************************ 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.234 21:03:43 
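The arithmetic in the per_node_1G_alloc preamble just above is worth spelling out: the test requests 1048576 kB (1 GiB) of huge pages on node 0, and with the 2048 kB huge page size reported in the meminfo snapshots that works out to 512 pages, which the script then passes to scripts/setup.sh as NRHUGE=512 HUGENODE=0. The default_setup check that just passed is the same bookkeeping in reverse: it compares the per-node count it read back with the expected value and asserts 'node0=1024 expecting 1024'. A minimal sketch of the size-to-pages conversion, with an illustrative helper name and a default taken from the snapshot rather than from the scripts:

#!/usr/bin/env bash
# Illustrative conversion: requested size in kB -> number of default-size huge pages.
size_kb_to_hugepages() {
    local size_kb=$1
    local hugepgsz_kb=${2:-2048}   # 'Hugepagesize: 2048 kB' in the snapshots above
    echo $(( size_kb / hugepgsz_kb ))
}
# size_kb_to_hugepages 1048576    # -> 512, i.e. NRHUGE=512 for HUGENODE=0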
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.234 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:32.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.496 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:32.496 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:32.496 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:32.496 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8903516 kB' 'MemAvailable: 10494948 kB' 'Buffers: 2436 kB' 'Cached: 1805232 kB' 'SwapCached: 0 kB' 'Active: 452384 kB' 'Inactive: 1477120 kB' 'Active(anon): 132300 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123412 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134328 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 6516 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.497 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.497 21:03:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[same compare/continue xtrace for each remaining field of the system-wide snapshot above, Active through VmallocChunk, none matching AnonHugePages]
00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.498 21:03:43
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8903776 kB' 'MemAvailable: 10495212 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452860 kB' 'Inactive: 1477124 kB' 'Active(anon): 132776 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123936 kB' 'Mapped: 48920 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134336 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72000 kB' 'KernelStack: 6600 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.498 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.498 21:03:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[same compare/continue xtrace for each remaining field of the snapshot above, Active(anon) through HugePages_Total, none matching HugePages_Surp]
00:04:32.500 21:03:43
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8903808 kB' 'MemAvailable: 10495244 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452380 kB' 'Inactive: 1477124 kB' 'Active(anon): 132296 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123468 kB' 'Mapped: 48792 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134340 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72004 kB' 'KernelStack: 6592 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
00:04:32.500 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: setup/common.sh@31-32 compares each /proc/meminfo key from Active(anon) through HugePages_Total against HugePages_Rsvd and skips it with continue]
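Just below, the HugePages_Rsvd lookup completes (resv=0), hugepages.sh echoes nr_hugepages=512 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and then, at hugepages.sh@107 and again at @110 after HugePages_Total is read back, asserts 512 == nr_hugepages + surp + resv. A toy restatement of that arithmetic with the values visible in this trace (variable names here are illustrative, not the script's own):
    # Values taken from the trace: 512 pages requested, no surplus, none reserved.
    nr_hugepages=512   # requested page count echoed at hugepages.sh@102
    surp=0             # HugePages_Surp from /proc/meminfo
    resv=0             # HugePages_Rsvd from /proc/meminfo
    hp_total=512       # HugePages_Total read back from /proc/meminfo
    if (( hp_total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent"
    else
        echo "hugepage accounting mismatch" >&2
    fi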
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.502 21:03:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:32.502 nr_hugepages=512 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:32.502 resv_hugepages=0 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.502 surplus_hugepages=0 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.502 anon_hugepages=0 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8903808 kB' 'MemAvailable: 10495244 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452416 kB' 'Inactive: 1477124 kB' 'Active(anon): 132332 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123500 kB' 'Mapped: 48792 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 
kB' 'Slab: 134340 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72004 kB' 'KernelStack: 6608 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:32.502 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: setup/common.sh@31-32 compares each /proc/meminfo key from Inactive through Unaccepted against HugePages_Total and skips it with continue]
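Just below, the HugePages_Total scan completes (echo 512), the accounting check repeats, and get_nodes enumerates /sys/devices/system/node/node+([0-9]), records 512 pages for the single node (no_nodes=1), and re-runs the same meminfo scan against node0's own meminfo file. For reference, the same per-node 2 MiB counts can also be read straight from sysfs; this is an alternative illustration, not what the traced script does:
    # Illustrative alternative: read each NUMA node's 2 MiB hugepage count directly from sysfs.
    # (Hugepagesize is 2048 kB on this machine, per the meminfo dump above.)
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node$node: $nr x 2MiB hugepages"
    done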
-- setup/common.sh@32 -- # continue 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8903808 kB' 'MemUsed: 3338164 kB' 'SwapCached: 0 kB' 'Active: 452604 kB' 'Inactive: 1477124 kB' 'Active(anon): 132520 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1807672 kB' 'Mapped: 48792 kB' 'AnonPages: 123676 kB' 'Shmem: 10464 kB' 'KernelStack: 6592 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 62336 kB' 'Slab: 134336 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.504 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.763 21:03:44 
00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: setup/common.sh@31-32 compares each node0 meminfo key from Active(file) through WritebackTmp against HugePages_Surp and skips it with continue]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.763 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.764 node0=512 expecting 512 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:32.764 00:04:32.764 real 0m0.522s 00:04:32.764 user 0m0.256s 00:04:32.764 sys 0m0.302s 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.764 21:03:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:32.764 ************************************ 00:04:32.764 END TEST per_node_1G_alloc 00:04:32.764 ************************************ 00:04:32.764 21:03:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:32.764 21:03:44 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:32.764 21:03:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.764 21:03:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.764 21:03:44 
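The HugePages_Surp scan that closes the test above, and the AnonHugePages / HugePages_Surp / HugePages_Rsvd scans that follow, are all the same small bash idiom: read a meminfo file with IFS=': ', skip every key that is not the one requested, and echo the value of the first match. A minimal stand-alone sketch of that pattern, assuming a plain /proc/meminfo (the function name and default file are illustrative, and the per-node "Node <N>" prefix handling seen in the trace is left out):

# Minimal sketch of the lookup that get_meminfo performs above (illustrative
# only, not the real setup/common.sh helper).
get_meminfo_field() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the per-key 'continue' records seen in the trace
        echo "${val:-0}"                   # HugePages_* values are plain counts, the rest are in kB
        return 0
    done < "$mem_f"
    echo 0                                 # key not present in this meminfo file
}

get_meminfo_field HugePages_Surp           # prints 0 here, matching the dump above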
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:32.764 ************************************ 00:04:32.764 START TEST even_2G_alloc 00:04:32.764 ************************************ 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.764 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:33.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:33.025 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.025 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc 
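The even_2G_alloc run that starts here exports NRHUGE=1024 and HUGE_EVEN_ALLOC=yes before calling scripts/setup.sh, which, as the test and variable names suggest, requests an even 2 MiB hugepage allocation across NUMA nodes rather than one pinned to a single node. A rough sketch of what such an even split looks like against the kernel's standard sysfs interface (hypothetical helper code, not the actual logic inside scripts/setup.sh; requires root):

# Spread NRHUGE 2 MiB hugepages evenly over the online NUMA nodes.
NRHUGE=${NRHUGE:-1024}
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$((NRHUGE / ${#nodes[@]}))

for node in "${nodes[@]}"; do
    echo "$per_node" > "$node/hugepages/hugepages-2048kB/nr_hugepages"
done

# The combined pool then shows up in /proc/meminfo, which is what the
# verify_nr_hugepages trace below reads back via get_meminfo.
grep -E '^HugePages_(Total|Free)' /proc/meminfo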
-- setup/hugepages.sh@92 -- # local surp 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7861500 kB' 'MemAvailable: 9452936 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452508 kB' 'Inactive: 1477124 kB' 'Active(anon): 132424 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123504 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134356 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72020 kB' 'KernelStack: 6576 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.026 21:03:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 xtrace repeats the IFS=': ' / read -r var val _ / continue cycle for the meminfo keys MemAvailable through VmallocTotal; none of them matches AnonHugePages ...]
00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val
_ 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7866920 kB' 'MemAvailable: 9458356 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452608 kB' 'Inactive: 1477124 kB' 'Active(anon): 132524 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 
1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123672 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134352 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72016 kB' 'KernelStack: 6576 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.027 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
[... setup/common.sh@31-32 xtrace repeats the IFS=': ' / read -r var val _ / continue cycle for the meminfo keys Active through HugePages_Total; none of them matches HugePages_Surp ...]
00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31
-- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7866672 kB' 'MemAvailable: 9458108 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452664 kB' 'Inactive: 1477124 kB' 'Active(anon): 132580 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123864 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134352 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72016 kB' 'KernelStack: 6640 kB' 'PageTables: 4540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:33.029 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.030 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- 
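At this point in the trace the HugePages_Rsvd lookup has just finished: every other /proc/meminfo key fell through a continue, the matching key echoed 0, and the helper returned. The long repetition above is exactly that key-by-key scan. Below is a minimal stand-in for the pattern; the function name get_meminfo_value and its shape are illustrative, not SPDK's actual setup/common.sh code.

    # Simplified sketch of the scan traced above (illustrative, not SPDK's helper):
    # read /proc/meminfo line by line, print the value for one key, skip the rest.
    get_meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] || continue   # non-matching keys are the "continue" lines above
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    # In this run: get_meminfo_value HugePages_Rsvd  ->  0  (recorded as resv=0 just below)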
setup/common.sh@33 -- # return 0 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:33.031 nr_hugepages=1024 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:33.031 resv_hugepages=0 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:33.031 surplus_hugepages=0 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:33.031 anon_hugepages=0 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7867572 kB' 'MemAvailable: 9459004 kB' 'Buffers: 2436 kB' 'Cached: 1805232 kB' 'SwapCached: 0 kB' 'Active: 452336 kB' 'Inactive: 1477120 kB' 'Active(anon): 132252 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123432 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134324 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71988 kB' 'KernelStack: 6544 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:33.031 21:03:44 
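The meminfo snapshot printed above reports HugePages_Total: 1024 with Hugepagesize: 2048 kB, i.e. the 2 GiB pool this even_2G_alloc case allocates, and the Hugetlb field is simply their product. A quick arithmetic check of those numbers (values copied from the dump above, nothing assumed beyond what it prints):

    # Values taken from the dump above; the arithmetic only confirms the 2 GiB pool size.
    hp_total=1024 hp_size_kb=2048
    echo "$(( hp_total * hp_size_kb )) kB"                 # 2097152 kB, the Hugetlb line
    echo "$(( hp_total * hp_size_kb / 1024 / 1024 )) GiB"  # 2 GiB, hence "even_2G_alloc"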
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.031 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.032 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.291 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:33.292 21:03:44 
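The HugePages_Total scan has now echoed 1024, and hugepages.sh re-checks that the pool it obtained matches what it requested (nr_hugepages plus surplus plus reserved pages), then enumerates the NUMA nodes and expects the whole pool on the single node present. A hedged sketch of that bookkeeping, using illustrative variable names rather than SPDK's exact ones:

    # Illustrative bookkeeping, mirroring the checks traced above.
    nr_hugepages=1024 surp=0 resv=0
    if (( 1024 == nr_hugepages + surp + resv )); then
        echo "allocated pool matches the request"
    fi

    # One NUMA node online (no_nodes=1), so node 0 is expected to hold all 1024 pages.
    declare -A nodes_test=([0]=1024)
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))          # resv is 0 here, so the target stays 1024
        echo "node $node: expecting ${nodes_test[node]} hugepages"
    done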
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7867572 kB' 'MemUsed: 4374400 kB' 'SwapCached: 0 kB' 'Active: 452364 kB' 'Inactive: 1477124 kB' 'Active(anon): 132280 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1807672 kB' 'Mapped: 48844 kB' 'AnonPages: 123688 kB' 'Shmem: 10464 kB' 'KernelStack: 6576 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62336 kB' 'Slab: 134320 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.292 21:03:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:33.292 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.293 21:03:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:33.293 [... the common.sh@31 IFS=': ' / read -r var val _ and common.sh@32 [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle repeats here for NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free ...]
00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:33.293 node0=1024 expecting 1024
00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:33.293 
00:04:33.293 real 0m0.500s
00:04:33.293 user 0m0.275s
00:04:33.293 sys 0m0.259s
00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:33.293 21:03:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:33.293 ************************************
00:04:33.293 END TEST even_2G_alloc
00:04:33.293 ************************************
00:04:33.293 21:03:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:33.293 21:03:44 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:33.293 21:03:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:33.293 21:03:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:33.293 21:03:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:33.293 ************************************
00:04:33.293 START TEST odd_alloc
00:04:33.293 ************************************
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:33.293 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:33.294 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:33.294 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:33.294 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:33.294 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:33.294 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
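For reference, the sizing in the odd_alloc setup above is self-consistent: HUGEMEM=2049 MiB is 2098176 kB, and with the 2048 kB hugepage size reported by this VM that rounds up to the odd count of 1025 pages (1025 x 2048 kB = 2099200 kB, the 'Hugetlb' figure in the meminfo dumps below). A minimal standalone sketch of that round-up arithmetic, using assumed variable names rather than the actual setup/hugepages.sh internals:

hugepage_kb=2048                       # 'Hugepagesize: 2048 kB' on this VM
hugemem_mb=2049                        # HUGEMEM exported by the odd_alloc test above
size_kb=$(( hugemem_mb * 1024 ))       # 2098176 kB, the size passed to get_test_nr_hugepages
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
echo "$nr_hugepages"                   # prints 1025, matching nr_hugepages=1025 in the trace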
00:04:33.294 21:03:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:33.294 21:03:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:33.294 21:03:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:33.554 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:33.554 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:33.554 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:33.554 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:33.555 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7864824 kB' 'MemAvailable: 9456260 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452936 kB' 'Inactive: 1477124 kB' 'Active(anon): 132852 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 124248 kB' 'Mapped: 48960 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134336 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72000 kB' 'KernelStack: 6564 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:33.555 [... for every field from MemTotal through HardwareCorrupted in the dump above, the trace records setup/common.sh@32 -- # [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]], @32 -- # continue, @31 -- # IFS=': ', @31 -- # read -r var val _ ...]
00:04:33.555 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.555 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:33.555 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:33.555 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
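The anon=0 result above comes from the lookup pattern that every get_meminfo call in this trace follows: snapshot /proc/meminfo, split each line on ': ', and echo the value once the requested field matches. A minimal self-contained bash sketch of that pattern (hypothetical function name; the real setup/common.sh get_meminfo also handles the per-node /sys/devices/system/node/nodeN/meminfo path probed at common.sh@23, which is omitted here):

# Hypothetical stand-in for the lookup pattern visible in the trace above.
lookup_meminfo() {
    local get=$1 var val _ mem
    mapfile -t mem < /proc/meminfo              # cf. common.sh@28
    while IFS=': ' read -r var val _; do        # cf. common.sh@31
        [[ $var == "$get" ]] || continue        # cf. common.sh@32
        echo "$val"                             # cf. common.sh@33
        return 0
    done < <(printf '%s\n' "${mem[@]}")         # cf. common.sh@16
    return 1
}
lookup_meminfo AnonHugePages                    # prints 0 here, hence anon=0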
00:04:33.555 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:33.556 [... get_meminfo prologue as above, now with common.sh@17 -- # local get=HugePages_Surp: @18 local node=, @19 local var val, @20 local mem_f mem, @22 mem_f=/proc/meminfo, @23 [[ -e /sys/devices/system/node/node/meminfo ]], @25 [[ -n '' ]], @28 mapfile -t mem, @29 mem=("${mem[@]#Node +([0-9]) }"), @31 IFS=': ' ...]
00:04:33.556 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7865220 kB' 'MemAvailable: 9456656 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452756 kB' 'Inactive: 1477124 kB' 'Active(anon): 132672 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123860 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134352 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72016 kB' 'KernelStack: 6576 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:33.556 [... for every field from MemTotal through HugePages_Rsvd in the dump above, the trace records setup/common.sh@31 -- # read -r var val _, @32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]], @32 -- # continue, @31 -- # IFS=': ' ...]
00:04:33.557 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.557 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:33.557 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:33.557 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:33.557 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:33.558 [... get_meminfo prologue as above, now with common.sh@17 -- # local get=HugePages_Rsvd, ending at @31 -- # IFS=': ' and @31 -- # read -r var val _ ...]
00:04:33.558 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7865220 kB' 'MemAvailable: 9456656 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452312 kB' 'Inactive: 1477124 kB' 'Active(anon): 132228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123368 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134340 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72004 kB' 'KernelStack: 6560 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
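With anon=0 and surp=0 above, and HugePages_Rsvd being read next, verify_nr_hugepages is heading toward the same per-node comparison that closed even_2G_alloc earlier in the log ('node0=1024 expecting 1024'); the trace also reads HugePages_Rsvd, which is 0 in the dump above. A rough standalone sketch of that kind of check for this single-NUMA-node VM, with assumed names and simplified logic rather than the actual setup/hugepages.sh implementation:

# Assumed illustration only, not the verify_nr_hugepages implementation.
expected=1025                                                  # nodes_test[0] for odd_alloc
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)    # 1025 in the dump above
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)      # 0 in the dump above
node0=$(( total - surp ))                                      # one node, so it holds every page
echo "node0=$node0 expecting $expected"
(( node0 == expected ))                                        # odd_alloc passes if this holds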
00:04:33.558 [... the HugePages_Rsvd scan proceeds the same way: for every field from MemTotal through VmallocUsed in the dump above, the trace records setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]], @32 -- # continue, @31 -- # IFS=': ', @31 -- # read -r var val _ at 00:04:33.558 through 00:04:33.820 ...] 00:04:33.820 21:03:45
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:33.820 nr_hugepages=1025 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:33.820 resv_hugepages=0 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:33.820 surplus_hugepages=0 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:33.820 anon_hugepages=0 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7865220 kB' 'MemAvailable: 9456656 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452572 kB' 'Inactive: 1477124 kB' 'Active(anon): 132488 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123628 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134340 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72004 kB' 'KernelStack: 6560 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.820 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.821 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7865220 kB' 'MemUsed: 4376752 kB' 'SwapCached: 0 kB' 'Active: 452280 kB' 'Inactive: 1477124 kB' 'Active(anon): 132196 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1807672 kB' 'Mapped: 48844 kB' 'AnonPages: 123336 kB' 'Shmem: 10464 kB' 'KernelStack: 6528 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62336 kB' 'Slab: 134336 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.822 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
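At this point the same field-by-field scan is running against node-local statistics: the caller passed node=0, so mem_f was switched from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and the key being searched for is HugePages_Surp. Combined with the values already extracted (nr_hugepages=1025 requested, HugePages_Total=1025, HugePages_Rsvd=0), this feeds the consistency check that verify_nr_hugepages in setup/hugepages.sh applies, the (( 1025 == nr_hugepages + surp + resv )) arithmetic traced earlier. A simplified sketch of that check, reusing the hypothetical get_mem_value helper sketched above; the real code also folds surplus and reserved pages into a per-node nodes_test array:

# Illustrative consistency check (not the verbatim hugepages.sh code).
nr_hugepages=1025                          # what the odd_alloc test asked for
resv=$(get_mem_value HugePages_Rsvd)       # 0 in this run
surp=$(get_mem_value HugePages_Surp 0)     # 0 on node0 in this run
total=$(get_mem_value HugePages_Total)     # 1025 system-wide
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2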
00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
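The node0 scan has just hit HugePages_Surp and echoed 0; the helper returns below, the per-node counter picks up 0 extra pages, and the test prints node0=1025 expecting 1025 before odd_alloc is declared passed. The odd page count is presumably chosen so that any rounding while spreading pages across nodes would surface as a mismatch, and the snapshot above is internally consistent on this point:

# Values taken verbatim from the printf dump above: 1025 pages of the default
# 2048 kB size account for all of the reported hugetlb memory.
echo $(( 1025 * 2048 ))    # 2099200 -> matches 'Hugetlb: 2099200 kB'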
00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:33.823 node0=1025 expecting 1025 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:33.823 00:04:33.823 real 0m0.509s 00:04:33.823 user 0m0.272s 00:04:33.823 sys 0m0.269s 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.823 21:03:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:33.823 ************************************ 00:04:33.823 END TEST odd_alloc 00:04:33.823 ************************************ 00:04:33.823 21:03:45 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:33.823 21:03:45 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:33.823 21:03:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.823 21:03:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.823 21:03:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:33.823 ************************************ 00:04:33.823 START TEST custom_alloc 00:04:33.823 ************************************ 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:33.823 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.824 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.084 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:34.084 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8925084 kB' 'MemAvailable: 10516520 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452896 kB' 'Inactive: 1477124 kB' 'Active(anon): 132812 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123900 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134360 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72024 kB' 'KernelStack: 6596 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 
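Annotation: the HUGENODE='nodes_hp[0]=512' string assembled just above is how the custom_alloc case asks scripts/setup.sh for a per-node allocation: 512 pages of the default 2048 kB size on NUMA node 0 and none elsewhere, which matches the HugePages_Total: 512 and Hugepagesize: 2048 kB values in the meminfo snapshot. A stand-alone sketch of the same request against the stock kernel sysfs interface follows (illustrative only, not SPDK's setup.sh, and it needs root):

    # Put 512 hugepages of the default 2048 kB size on NUMA node 0 only.
    # The sysfs path is the standard kernel interface; the loop is illustrative,
    # not copied from SPDK, and must run as root.
    nodes_hp=([0]=512)
    for node in "${!nodes_hp[@]}"; do
        echo "${nodes_hp[node]}" \
            > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done
    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo   # expect 512 / 512 / 2048 kB as above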
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.084 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241972 kB' 'MemFree: 8925084 kB' 'MemAvailable: 10516520 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452568 kB' 'Inactive: 1477124 kB' 'Active(anon): 132484 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123832 kB' 'Mapped: 48864 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134376 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72040 kB' 'KernelStack: 6596 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.085 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.086 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.349 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
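Annotation: the backslash runs such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p above are not log corruption. When set -x traces a [[ ... == ... ]] whose right-hand side is a quoted variable, bash prints that operand with every character escaped to show it is matched literally rather than as a glob pattern. A minimal reproduction:

    # With xtrace on, the quoted pattern is printed character-escaped in the trace line.
    set -x
    get=HugePages_Surp
    var=MemTotal
    [[ $var == "$get" ]] || true   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]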
00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8925780 kB' 'MemAvailable: 10517216 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452272 kB' 'Inactive: 1477124 kB' 'Active(anon): 132188 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123584 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134352 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72016 kB' 'KernelStack: 6576 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 
'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.350 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.351 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
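For orientation: the xtrace around this point is setup/common.sh's get_meminfo walking /proc/meminfo with IFS=': ', comparing each key against the requested field (HugePages_Rsvd here) and hitting "continue" on every other key until the match lets it echo the value. A minimal sketch of that pattern, using a hypothetical function name rather than the exact SPDK helper:

    # Minimal sketch of the get_meminfo pattern traced here: scan /proc/meminfo
    # with IFS=': ' and print the value of one field. The function name
    # (get_meminfo_field) is illustrative, not the SPDK helper itself.
    get_meminfo_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # the skipped keys are the "continue" lines in the xtrace
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    # e.g. get_meminfo_field HugePages_Rsvd   -> 0 on this runner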
00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:34.352 nr_hugepages=512 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:34.352 resv_hugepages=0 
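The reserved count just came back as 0 and the test is working with 512 requested pages; the steps traced next at setup/hugepages.sh@102-@110 echo the counters and appear to assert that the kernel's HugePages_Total agrees with requested plus surplus plus reserved. A rough sketch of that consistency check, reusing the helper sketched above (variable names are illustrative; the numbers are the ones echoed in this run):

    nr_hugepages=512                                 # pages the test asked for
    resv=$(get_meminfo_field HugePages_Rsvd)         # 0 here
    surp=$(get_meminfo_field HugePages_Surp)         # 0 here
    total=$(get_meminfo_field HugePages_Total)       # 512 here

    # Cross-check the kernel-reported total against the expected accounting.
    if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    else
        echo "hugepage accounting off: total=$total" >&2
    fi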
00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.352 surplus_hugepages=0 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.352 anon_hugepages=0 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8926036 kB' 'MemAvailable: 10517472 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452392 kB' 'Inactive: 1477124 kB' 'Active(anon): 132308 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123704 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134348 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72012 kB' 'KernelStack: 6560 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.352 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 
21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.353 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8926036 kB' 'MemUsed: 3315936 kB' 'SwapCached: 0 kB' 'Active: 452264 kB' 'Inactive: 1477124 kB' 'Active(anon): 132180 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1807672 kB' 'Mapped: 48796 kB' 'AnonPages: 123572 kB' 'Shmem: 10464 kB' 'KernelStack: 6560 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62336 kB' 'Slab: 134344 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 72008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.354 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.355 node0=512 expecting 512 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:34.355 00:04:34.355 real 0m0.525s 00:04:34.355 user 0m0.254s 00:04:34.355 sys 0m0.302s 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.355 21:03:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:34.355 ************************************ 00:04:34.355 END TEST custom_alloc 
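custom_alloc closes by spreading the expectation across NUMA nodes: get_nodes globs /sys/devices/system/node/node+([0-9]), each node's total is folded together with its reserved and surplus counts, and the "node0=512 expecting 512" line plus the [[ 512 == 512 ]] check just above is that comparison passing on this single-node guest. A simplified sketch of the per-node readback; awk is used here for brevity, whereas the script itself reuses its get_meminfo read loop against /sys/devices/system/node/node<N>/meminfo, and the "expected" array is illustrative:

    shopt -s extglob
    declare -A expected=( [0]=512 )   # pages the test placed on node 0

    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
        echo "node${node}=${total} expecting ${expected[$node]:-0}"
        [[ $total == "${expected[$node]:-0}" ]] || echo "node $node mismatch" >&2
    done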
00:04:34.355 ************************************ 00:04:34.355 21:03:45 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:34.355 21:03:45 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:34.355 21:03:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.355 21:03:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.355 21:03:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.355 ************************************ 00:04:34.355 START TEST no_shrink_alloc 00:04:34.355 ************************************ 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:34.355 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:34.356 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:34.356 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.356 21:03:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.615 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.615 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:34.615 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:34.615 
21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.615 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7926492 kB' 'MemAvailable: 9517928 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 453012 kB' 'Inactive: 1477124 kB' 'Active(anon): 132928 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 124048 kB' 'Mapped: 48808 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134332 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71996 kB' 'KernelStack: 6628 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
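The no_shrink_alloc test that just started sized its request with get_test_nr_hugepages 2097152 0, i.e. 2097152 kB placed on node 0, which at the 2048 kB Hugepagesize reported in the dump above works out to the 1024 pages now visible in HugePages_Total. verify_nr_hugepages then checks that transparent hugepages are not pinned to [never] before reading AnonHugePages, which is the key scan continuing below. A sketch of those two steps; the sysfs path is an assumption, since the trace only shows the "always [madvise] never" string being pattern-matched:

    size_kb=2097152
    hugepagesize_kb=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)   # 2048 on this runner
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                      # 1024

    # Gate on transparent hugepages not being forced off, then read AnonHugePages.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    if [[ $thp != *"[never]"* ]]; then
        anon_kb=$(awk '/AnonHugePages/ {print $2}' /proc/meminfo)
        echo "nr_hugepages=$nr_hugepages AnonHugePages=${anon_kb} kB"
    fi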
00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.616 
21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.616 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 
21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.880 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7926492 kB' 'MemAvailable: 9517928 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452592 kB' 'Inactive: 1477124 kB' 'Active(anon): 132508 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123580 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134328 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 6544 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.881 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7926492 kB' 'MemAvailable: 9517928 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452676 kB' 'Inactive: 1477124 kB' 'Active(anon): 132592 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123704 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134328 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 6576 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.882 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.883 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:34.883-00:04:34.884 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the remaining /proc/meminfo keys (Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free) are each read with IFS=': ' / read -r var val _ and skipped via continue until HugePages_Rsvd matches]
00:04:34.884 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:34.884 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:34.884 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100-105 -- # [xtrace condensed: resv=0; the test then reports nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0]
00:04:34.884 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:34.884 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:34.884 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:34.884 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # [xtrace condensed: get_meminfo sets get=HugePages_Total with node unset, keeps mem_f=/proc/meminfo, mapfiles the file into mem and strips any leading "Node <n> " prefix]
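For readers following the trace: the long runs of IFS=': ' / read -r var val _ / continue above and below come from a lookup that walks a mapfile'd copy of a meminfo file until the requested key matches. A minimal sketch of that style of lookup, loosely reconstructed from the xtrace and not copied from SPDK's setup/common.sh (the helper name meminfo_get and its interface are assumptions):

#!/usr/bin/env bash
# Hedged sketch only: approximates the lookup pattern visible in the xtrace.
# meminfo_get is a hypothetical name; SPDK's real helper is get_meminfo in setup/common.sh.
shopt -s extglob

meminfo_get() {
    local get=$1 node=${2:-}   # field name, optional NUMA node number
    local mem_f=/proc/meminfo
    # Per-node lookups switch to the node-specific meminfo file when it exists,
    # mirroring the [[ -e /sys/devices/system/node/node$node/meminfo ]] test in the trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it as the trace does.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"    # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total in this run
            return 0
        fi
    done
    return 1
}

# Usage mirroring hugepages.sh@100 above: resv=$(meminfo_get HugePages_Rsvd)
meminfo_get HugePages_Rsvd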
00:04:34.884 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:34.885 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7926492 kB' 'MemAvailable: 9517928 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452628 kB' 'Inactive: 1477124 kB' 'Active(anon): 132544 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123704 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134328 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 6576 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:34.885-00:04:34.886 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every key from MemTotal through Unaccepted, in the snapshot order above, is read and skipped via continue until HugePages_Total matches]
00:04:34.886 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:34.886 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:34.886 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:34.886 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:34.886 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:34.886 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:34.886 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:34.886 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:34.886 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:34.886 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:34.886 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:34.886 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:34.886 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [xtrace condensed: get_meminfo sets get=HugePages_Surp and node=0, switches mem_f to /sys/devices/system/node/node0/meminfo, mapfiles it, strips the "Node 0 " prefix and starts the IFS=': ' / read -r var val _ loop]
00:04:34.886 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7926492 kB' 'MemUsed: 4315480 kB' 'SwapCached: 0 kB' 'Active: 452684 kB' 'Inactive: 1477124 kB' 'Active(anon): 132600 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1807672 kB' 'Mapped: 48796 kB' 'AnonPages: 123704 kB' 'Shmem: 10464 kB' 'KernelStack: 6560 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62336 kB' 'Slab: 134328 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
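The per-node pass traced here (hugepages.sh@115-@117) folds each node's HugePages_Surp, read from /sys/devices/system/node/nodeN/meminfo, into the expected count before the nodeN=... expecting ... lines are printed. A rough standalone sketch of that check, reusing the hypothetical meminfo_get helper from the earlier sketch (an approximation of hugepages.sh, not its actual code; the expected value of 1024 is hard-coded only to mirror this run):

#!/usr/bin/env bash
# Hedged sketch: approximates the per-node verification, not SPDK's actual hugepages.sh.
shopt -s extglob
# Assumes the hypothetical meminfo_get helper from the previous sketch is already defined.

expected=1024                                   # nr_hugepages requested by the test
resv=$(meminfo_get HugePages_Rsvd)              # 0 in this run

for node_dir in /sys/devices/system/node/node+([0-9]); do
    [[ -d $node_dir ]] || continue
    id=${node_dir##*node}
    surp=$(meminfo_get HugePages_Surp "$id")    # read from nodeN/meminfo
    actual=$(meminfo_get HugePages_Total "$id")
    # Mirrors the "node0=1024 expecting 1024" line printed at hugepages.sh@128.
    echo "node$id=$actual expecting $(( expected + resv + surp ))"
done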
00:04:34.886-00:04:34.887 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every node0 key from MemTotal through HugePages_Free, in the snapshot order above, is read and skipped via continue until HugePages_Surp matches]
00:04:34.887 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:34.887 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:34.887 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:34.887 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:34.887 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:34.887 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:34.887 node0=1024 expecting 1024
00:04:34.887 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:34.887 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:34.887 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:34.887 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:34.887 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:34.887 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:34.887 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:35.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:35.147 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:35.147 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:35.147 INFO: Requested 512 hugepages but 1024 already allocated on node0
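The INFO line is the point of the no_shrink_alloc case: setup.sh is re-run with NRHUGE=512 while node0 already holds 1024 pages, the reservation is left in place rather than shrunk, and the verify_nr_hugepages pass that follows still expects 1024. A minimal sketch of that kind of guard (the sysfs path is the standard kernel interface; the function name and structure are assumptions, not SPDK's actual scripts/setup.sh):

#!/usr/bin/env bash
# Hedged sketch of a "grow but never shrink" hugepage reservation guard.
# maybe_grow_hugepages is a hypothetical name; SPDK's real logic lives in scripts/setup.sh.

maybe_grow_hugepages() {
    local requested=$1 node=${2:-0} size_kb=${3:-2048}
    local nr_file=/sys/devices/system/node/node$node/hugepages/hugepages-${size_kb}kB/nr_hugepages
    local current
    current=$(<"$nr_file")
    if (( current >= requested )); then
        echo "INFO: Requested $requested hugepages but $current already allocated on node$node"
        return 0
    fi
    # Writing a smaller number would release pages, so only ever write a larger one.
    echo "$requested" > "$nr_file"   # needs root
}

# Mirrors this run: NRHUGE=512 while node0 already has 1024 2 MiB pages reserved.
maybe_grow_hugepages 512 0 2048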
00:04:35.147 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:35.147 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89-94 -- # [xtrace condensed: verify_nr_hugepages declares its locals (node, sorted_t, sorted_s, surp, resv, anon)]
00:04:35.147 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:35.147 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:35.147 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [xtrace condensed: get_meminfo sets get=AnonHugePages with node unset, keeps mem_f=/proc/meminfo, mapfiles it, strips any leading "Node <n> " prefix and starts the IFS=': ' / read -r var val _ loop]
00:04:35.147 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7918192 kB' 'MemAvailable: 9509628 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 453452 kB' 'Inactive: 1477124 kB' 'Active(anon): 133368 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 124232 kB' 'Mapped: 49168 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134320 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71984 kB' 'KernelStack: 6608 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:04:35.147-00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every key from MemTotal through HardwareCorrupted, in the snapshot order above, is read and skipped via continue until AnonHugePages matches]
00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7918296 kB' 'MemAvailable: 9509732 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452668 kB' 'Inactive: 1477124 kB' 'Active(anon): 132584 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123724 kB' 'Mapped: 48856 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134328 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 6576 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.149 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
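The hugepage fields in the meminfo snapshot printed above are self-consistent: HugePages_Total and HugePages_Free are both 1024, HugePages_Rsvd and HugePages_Surp are 0, and with a Hugepagesize of 2048 kB the Hugetlb total is simply the page count times the page size. A one-line check using only numbers taken from that snapshot:

    echo "$((1024 * 2048)) kB"   # 2097152 kB, matching the Hugetlb field in the snapshot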
00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.412 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 
21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.413 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7918296 kB' 'MemAvailable: 9509732 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452688 kB' 'Inactive: 1477124 kB' 'Active(anon): 132604 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123712 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134328 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 6576 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.414 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:35.415 nr_hugepages=1024 00:04:35.415 resv_hugepages=0 00:04:35.415 surplus_hugepages=0 00:04:35.415 anon_hugepages=0 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.415 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7918296 kB' 'MemAvailable: 9509732 kB' 'Buffers: 2436 kB' 'Cached: 1805236 kB' 'SwapCached: 0 kB' 'Active: 452684 kB' 'Inactive: 1477124 kB' 'Active(anon): 132600 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123720 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134324 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71988 kB' 'KernelStack: 6560 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.416 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
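[Editor's note] The long run of "[[ <field> == HugePages_Total ]] ... continue" entries above (and the matching echo/return just below) is the xtrace of the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node argument is given, stripping the "Node <n> " prefix) and walks the key/value pairs with IFS=': ' until the requested key matches, then echoes its value. A minimal stand-alone sketch of the same idea, not the exact SPDK helper, would look like this:

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup (assumed simplification of setup/common.sh).
# Usage: get_meminfo <field> [<numa node>]
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val rest
    # Per-node statistics live under sysfs instead of /proc.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        # Per-node meminfo lines are prefixed with "Node <n> "; strip that first.
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"            # numeric value only; the "kB" unit lands in $rest
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Total      # prints 1024 on the VM in this log
get_meminfo HugePages_Surp 0     # same lookup against NUMA node 0's meminfo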
00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.417 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7918296 kB' 'MemUsed: 4323676 kB' 'SwapCached: 0 kB' 'Active: 452576 kB' 'Inactive: 1477124 kB' 'Active(anon): 132492 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 
kB' 'Inactive(file): 1477124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1807672 kB' 'Mapped: 48796 kB' 'AnonPages: 123596 kB' 'Shmem: 10464 kB' 'KernelStack: 6560 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62336 kB' 'Slab: 134328 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 
21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.418 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.419 21:03:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:35.419 node0=1024 expecting 1024 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:35.419 00:04:35.419 real 0m1.041s 00:04:35.419 user 0m0.532s 00:04:35.419 sys 0m0.540s 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.419 21:03:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:35.419 ************************************ 00:04:35.419 END TEST no_shrink_alloc 00:04:35.419 ************************************ 00:04:35.419 21:03:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:35.419 21:03:46 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:35.419 21:03:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:35.419 21:03:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:35.419 
21:03:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.419 21:03:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:35.419 21:03:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.419 21:03:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:35.419 21:03:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:35.419 21:03:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:35.419 00:04:35.419 real 0m4.510s 00:04:35.419 user 0m2.207s 00:04:35.419 sys 0m2.353s 00:04:35.419 21:03:46 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.419 21:03:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:35.419 ************************************ 00:04:35.419 END TEST hugepages 00:04:35.419 ************************************ 00:04:35.419 21:03:46 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:35.419 21:03:46 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:35.419 21:03:46 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.419 21:03:46 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.419 21:03:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:35.419 ************************************ 00:04:35.419 START TEST driver 00:04:35.419 ************************************ 00:04:35.419 21:03:46 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:35.678 * Looking for test storage... 00:04:35.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:35.678 21:03:47 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:35.678 21:03:47 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:35.678 21:03:47 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:36.245 21:03:47 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:36.245 21:03:47 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.245 21:03:47 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.245 21:03:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:36.245 ************************************ 00:04:36.245 START TEST guess_driver 00:04:36.245 ************************************ 00:04:36.245 21:03:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:36.245 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:36.245 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:36.245 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:36.245 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:36.245 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:36.245 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
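[Editor's note] The guess_driver flow starting here picks vfio-pci only when the host actually has IOMMU groups populated (or vfio's "unsafe no-IOMMU" knob reads Y); otherwise it falls back to uio_pci_generic, accepting it only if modprobe --show-depends can resolve the module to a .ko on disk. On this VM there are no IOMMU groups, so the run below settles on uio_pci_generic. A hedged, stand-alone approximation of that decision (not the exact driver.sh code):

#!/usr/bin/env bash
# Rough re-creation of the vfio-vs-uio choice seen in setup/driver.sh (assumed logic).
pick_driver() {
    local groups unsafe=""
    # vfio-pci is usable if at least one IOMMU group exists...
    groups=(/sys/kernel/iommu_groups/*)
    [[ -e ${groups[0]} ]] && { echo vfio-pci; return 0; }
    # ...or if vfio was loaded with unsafe no-IOMMU mode enabled.
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        [[ $unsafe == Y ]] && { echo vfio-pci; return 0; }
    fi
    # Fall back to uio_pci_generic, but only if modprobe can resolve it to a real module.
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo "No valid driver found" >&2
    return 1
}

driver=$(pick_driver) && echo "Looking for driver=$driver"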
00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:36.246 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:36.246 Looking for driver=uio_pci_generic 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.246 21:03:47 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:36.814 21:03:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:36.814 21:03:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:36.814 21:03:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.814 21:03:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.814 21:03:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:36.814 21:03:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.073 21:03:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.073 21:03:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:37.073 21:03:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.073 21:03:48 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:37.073 21:03:48 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:37.073 21:03:48 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.073 21:03:48 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.653 00:04:37.653 real 0m1.392s 00:04:37.653 user 0m0.530s 00:04:37.653 sys 0m0.876s 00:04:37.653 ************************************ 00:04:37.653 END TEST guess_driver 00:04:37.653 
************************************ 00:04:37.653 21:03:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.653 21:03:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:37.653 21:03:49 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:37.654 ************************************ 00:04:37.654 END TEST driver 00:04:37.654 ************************************ 00:04:37.654 00:04:37.654 real 0m2.062s 00:04:37.654 user 0m0.756s 00:04:37.654 sys 0m1.369s 00:04:37.654 21:03:49 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.654 21:03:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:37.654 21:03:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:37.654 21:03:49 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:37.654 21:03:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.654 21:03:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.654 21:03:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.654 ************************************ 00:04:37.654 START TEST devices 00:04:37.654 ************************************ 00:04:37.654 21:03:49 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:37.654 * Looking for test storage... 00:04:37.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:37.654 21:03:49 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:37.654 21:03:49 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:37.654 21:03:49 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.654 21:03:49 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.601 21:03:49 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
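[Editor's note] The get_zoned_devs pass that begins here walks /sys/block/nvme* and reads each device's queue/zoned attribute; anything other than "none" is recorded as zoned and excluded from the tests. Every namespace on this VM reports "none", so nothing is skipped. A small sketch of the same check (variable names and output are illustrative, not SPDK's):

#!/usr/bin/env bash
# Collect zoned NVMe block devices, approximating the log's get_zoned_devs pass.
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme ]] || continue                  # no NVMe devices present at all
    dev=${nvme##*/}
    # "none" means a conventional namespace; "host-aware"/"host-managed" are zoned.
    if [[ -e $nvme/queue/zoned && $(< "$nvme/queue/zoned") != none ]]; then
        zoned_devs[$dev]=1
    fi
done
echo "zoned devices found: ${#zoned_devs[@]}"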
00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:38.601 21:03:49 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:38.602 21:03:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:38.602 21:03:49 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:38.602 21:03:49 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:38.602 21:03:49 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:38.602 No valid GPT data, bailing 00:04:38.602 21:03:49 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:38.602 21:03:49 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:38.602 21:03:49 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:38.602 21:03:49 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:38.602 21:03:49 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:38.602 21:03:49 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:38.602 21:03:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:38.602 
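[Editor's note] For each candidate namespace the script then asks two questions: is the disk already in use (does it carry a partition table?), and is it at least min_disk_size (3221225472 bytes, i.e. 3 GiB)? In the log this is the spdk-gpt.py probe ("No valid GPT data, bailing" means the disk is blank), a blkid PTTYPE check, and a size read that prints 4294967296 for each 4 GiB namespace. A hedged equivalent of those two checks using only standard tools (not the SPDK implementation; sysfs reports size in 512-byte sectors):

#!/usr/bin/env bash
# Approximate "is this disk free and large enough?" check.
min_disk_size=$((3 * 1024 * 1024 * 1024))        # 3221225472 bytes, as in devices.sh

disk_usable() {
    local dev=$1 pt size_bytes
    # A non-empty PTTYPE (gpt, dos, ...) means the disk already has a partition table.
    pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
    [[ -z $pt ]] || { echo "$dev: already partitioned ($pt)"; return 1; }
    # /sys/block/<dev>/size is always counted in 512-byte sectors.
    size_bytes=$(( $(< "/sys/block/$dev/size") * 512 ))
    (( size_bytes >= min_disk_size )) || { echo "$dev: too small ($size_bytes B)"; return 1; }
    echo "$dev: usable, $size_bytes bytes"
}

disk_usable nvme0n1    # on this VM: usable, 4294967296 bytes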
21:03:49 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:38.602 21:03:49 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:38.602 No valid GPT data, bailing 00:04:38.602 21:03:50 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:38.602 21:03:50 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:38.602 21:03:50 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:38.602 21:03:50 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:38.602 21:03:50 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:38.602 21:03:50 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:38.602 21:03:50 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:38.602 21:03:50 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:38.602 No valid GPT data, bailing 00:04:38.602 21:03:50 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:38.602 21:03:50 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:38.602 21:03:50 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:38.602 21:03:50 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:38.602 21:03:50 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:38.602 21:03:50 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:38.602 21:03:50 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:38.602 21:03:50 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:38.602 21:03:50 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:38.861 No valid GPT data, bailing 00:04:38.861 21:03:50 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:38.861 21:03:50 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:38.861 21:03:50 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:38.861 21:03:50 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:38.861 21:03:50 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:38.861 21:03:50 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:38.861 21:03:50 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:38.861 21:03:50 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:38.861 21:03:50 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:38.861 21:03:50 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:38.861 21:03:50 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:38.861 21:03:50 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:38.861 21:03:50 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:38.861 21:03:50 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.861 21:03:50 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.861 21:03:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:38.861 ************************************ 00:04:38.861 START TEST nvme_mount 00:04:38.861 ************************************ 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:38.861 21:03:50 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:39.798 Creating new GPT entries in memory. 00:04:39.798 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:39.798 other utilities. 00:04:39.798 21:03:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:39.798 21:03:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.798 21:03:51 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:39.798 21:03:51 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.798 21:03:51 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:40.734 Creating new GPT entries in memory. 00:04:40.734 The operation has completed successfully. 00:04:40.734 21:03:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:40.734 21:03:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.734 21:03:52 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57698 00:04:40.734 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.734 21:03:52 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:40.734 21:03:52 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.734 21:03:52 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:40.734 21:03:52 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:40.993 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.252 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.252 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.252 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:41.252 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.511 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.511 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:41.511 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.511 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:41.511 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.511 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:41.511 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.512 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.512 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.512 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:41.512 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:41.512 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:41.512 21:03:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:41.771 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:41.771 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:41.771 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:41.771 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.771 21:03:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:42.031 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.031 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:42.031 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:42.031 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.031 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.031 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.031 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.031 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.289 21:03:53 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.289 21:03:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:42.547 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.547 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:42.547 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:42.547 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.547 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.547 21:03:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.806 21:03:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.806 21:03:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.806 21:03:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.806 21:03:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.806 21:03:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.806 21:03:54 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:42.806 21:03:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:42.806 21:03:54 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:42.806 21:03:54 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.806 21:03:54 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.806 21:03:54 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.806 21:03:54 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:42.806 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:42.806 00:04:42.806 real 0m4.041s 00:04:42.806 user 0m0.707s 00:04:42.806 sys 0m1.075s 00:04:42.806 21:03:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.806 21:03:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:42.806 ************************************ 00:04:42.806 END TEST nvme_mount 00:04:42.806 ************************************ 00:04:42.806 21:03:54 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:42.806 21:03:54 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:42.806 21:03:54 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.806 21:03:54 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.806 21:03:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:42.806 ************************************ 00:04:42.806 START TEST dm_mount 00:04:42.806 ************************************ 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:42.806 21:03:54 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:44.182 Creating new GPT entries in memory. 00:04:44.182 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:44.182 other utilities. 00:04:44.182 21:03:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:44.182 21:03:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.182 21:03:55 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:44.182 21:03:55 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:44.182 21:03:55 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:45.117 Creating new GPT entries in memory. 00:04:45.117 The operation has completed successfully. 00:04:45.117 21:03:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.117 21:03:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.117 21:03:56 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:45.117 21:03:56 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:45.117 21:03:56 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:46.052 The operation has completed successfully. 
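For reference, the partitioning that the dm_mount setup traces above can be reproduced by hand roughly as follows. This is a minimal sketch built only from commands already visible in the trace; the sector ranges (two 128 MiB partitions) come straight from the sgdisk calls in the log, and sync_dev_uevents.sh is the repository's own helper for waiting until udev has created the partition nodes.

  # Wipe any existing GPT/MBR metadata on the test disk.
  sgdisk /dev/nvme0n1 --zap-all
  # Create the two partitions under flock, as the trace shows, so no other
  # setup script races on the same device.
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335
  # Wait for /dev/nvme0n1p1 and /dev/nvme0n1p2 to show up.
  /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2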
00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 58131 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.052 21:03:57 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:46.311 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:46.311 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:46.311 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:46.311 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.311 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:46.311 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.311 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:46.311 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.571 21:03:57 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:46.830 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:46.830 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:46.830 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:46.830 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.830 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:46.830 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.830 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:46.830 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.089 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:47.089 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.089 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.089 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:47.089 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:47.089 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:47.089 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:47.089 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:47.089 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:47.089 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.089 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:47.089 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:47.089 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:47.089 21:03:58 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:47.089 00:04:47.089 real 0m4.229s 00:04:47.089 user 0m0.495s 00:04:47.089 sys 0m0.700s 00:04:47.089 21:03:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.089 21:03:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:47.089 ************************************ 00:04:47.089 END TEST dm_mount 00:04:47.089 ************************************ 00:04:47.089 21:03:58 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:47.089 21:03:58 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:47.089 21:03:58 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:47.089 21:03:58 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:47.089 21:03:58 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.089 21:03:58 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:47.089 21:03:58 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.089 21:03:58 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:47.349 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:47.349 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:47.349 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:47.349 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:47.349 21:03:58 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:47.349 21:03:58 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:47.349 21:03:58 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:47.349 21:03:58 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.349 21:03:58 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:47.349 21:03:58 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.349 21:03:58 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:47.349 00:04:47.349 real 0m9.802s 00:04:47.349 user 0m1.878s 00:04:47.349 sys 0m2.335s 00:04:47.349 21:03:58 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.349 ************************************ 00:04:47.349 END TEST devices 00:04:47.349 ************************************ 00:04:47.349 21:03:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.349 21:03:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:47.349 00:04:47.349 real 0m21.285s 00:04:47.349 user 0m6.987s 00:04:47.349 sys 0m8.762s 00:04:47.349 21:03:58 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.349 21:03:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.349 ************************************ 00:04:47.349 END TEST setup.sh 00:04:47.349 ************************************ 00:04:47.608 21:03:58 -- common/autotest_common.sh@1142 -- # return 0 00:04:47.608 21:03:58 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:48.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.176 Hugepages 00:04:48.176 node hugesize free / total 00:04:48.176 node0 1048576kB 0 / 0 00:04:48.176 node0 2048kB 2048 / 2048 00:04:48.176 00:04:48.176 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:48.176 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:48.434 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:48.434 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:48.434 21:03:59 -- spdk/autotest.sh@130 -- # uname -s 00:04:48.434 21:03:59 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:48.434 21:03:59 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:48.434 21:03:59 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:49.000 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.258 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.258 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.259 21:04:00 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:50.221 21:04:01 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:50.221 21:04:01 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:50.221 21:04:01 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:50.221 21:04:01 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:50.221 21:04:01 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:50.221 21:04:01 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:50.221 21:04:01 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:50.221 21:04:01 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:50.221 21:04:01 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:50.479 21:04:01 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:50.479 21:04:01 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:50.479 21:04:01 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:50.737 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.737 Waiting for block devices as requested 00:04:50.737 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:50.737 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:50.996 21:04:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:50.996 21:04:02 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:50.996 21:04:02 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:50.996 21:04:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:50.996 21:04:02 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:50.996 21:04:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:50.996 21:04:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:50.996 21:04:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:50.996 21:04:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:50.996 21:04:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:50.996 21:04:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:50.996 21:04:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:50.996 21:04:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:50.996 21:04:02 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:50.996 21:04:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:50.996 21:04:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:50.996 21:04:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:50.996 21:04:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:50.996 21:04:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:50.996 21:04:02 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:50.996 21:04:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:50.996 21:04:02 -- common/autotest_common.sh@1557 -- # continue 00:04:50.996 
21:04:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:50.996 21:04:02 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:50.996 21:04:02 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:50.996 21:04:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:50.996 21:04:02 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:50.996 21:04:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:50.996 21:04:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:50.996 21:04:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:50.996 21:04:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:50.996 21:04:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:50.996 21:04:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:50.996 21:04:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:50.996 21:04:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:50.996 21:04:02 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:50.996 21:04:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:50.996 21:04:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:50.996 21:04:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:50.996 21:04:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:50.996 21:04:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:50.996 21:04:02 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:50.996 21:04:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:50.996 21:04:02 -- common/autotest_common.sh@1557 -- # continue 00:04:50.996 21:04:02 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:50.996 21:04:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:50.996 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:04:50.996 21:04:02 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:50.996 21:04:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:50.996 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:04:50.996 21:04:02 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:51.562 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:51.821 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:51.821 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:51.821 21:04:03 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:51.821 21:04:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:51.821 21:04:03 -- common/autotest_common.sh@10 -- # set +x 00:04:51.821 21:04:03 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:51.821 21:04:03 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:51.821 21:04:03 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:51.821 21:04:03 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:51.821 21:04:03 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:52.105 21:04:03 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:52.105 21:04:03 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:52.105 21:04:03 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:52.105 21:04:03 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:52.105 21:04:03 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:52.105 21:04:03 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:52.105 21:04:03 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:52.105 21:04:03 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:52.105 21:04:03 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:52.105 21:04:03 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:52.105 21:04:03 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:52.105 21:04:03 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:52.105 21:04:03 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:52.105 21:04:03 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:52.105 21:04:03 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:52.105 21:04:03 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:52.105 21:04:03 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:52.105 21:04:03 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:52.105 21:04:03 -- common/autotest_common.sh@1593 -- # return 0 00:04:52.105 21:04:03 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:52.105 21:04:03 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:52.105 21:04:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:52.105 21:04:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:52.105 21:04:03 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:52.105 21:04:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.105 21:04:03 -- common/autotest_common.sh@10 -- # set +x 00:04:52.105 21:04:03 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:52.105 21:04:03 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:52.105 21:04:03 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:52.105 21:04:03 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:52.105 21:04:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.105 21:04:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.105 21:04:03 -- common/autotest_common.sh@10 -- # set +x 00:04:52.105 ************************************ 00:04:52.105 START TEST env 00:04:52.105 ************************************ 00:04:52.105 21:04:03 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:52.105 * Looking for test storage... 
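The BDF discovery traced above reduces to a short pipeline. A standalone sketch, with the repository path and the expected addresses (0000:00:10.0 and 0000:00:11.0) taken from the log; jq must be available, as it is on this test VM:

  # List the NVMe controllers SPDK would drive, by PCI address.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"
  # The opal_revert_cleanup step compares each PCI device ID with 0x0a54;
  # the QEMU controllers here report 0x0010, so nothing is reverted.
  for bdf in "${bdfs[@]}"; do
      cat "/sys/bus/pci/devices/$bdf/device"
  done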
00:04:52.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:52.105 21:04:03 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:52.105 21:04:03 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.105 21:04:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.105 21:04:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.105 ************************************ 00:04:52.105 START TEST env_memory 00:04:52.105 ************************************ 00:04:52.105 21:04:03 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:52.105 00:04:52.105 00:04:52.105 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.105 http://cunit.sourceforge.net/ 00:04:52.105 00:04:52.105 00:04:52.105 Suite: memory 00:04:52.105 Test: alloc and free memory map ...[2024-07-14 21:04:03.625430] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:52.363 passed 00:04:52.363 Test: mem map translation ...[2024-07-14 21:04:03.686296] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:52.363 [2024-07-14 21:04:03.686379] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:52.363 [2024-07-14 21:04:03.686481] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:52.363 [2024-07-14 21:04:03.686512] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:52.363 passed 00:04:52.363 Test: mem map registration ...[2024-07-14 21:04:03.784524] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:52.363 [2024-07-14 21:04:03.784596] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:52.363 passed 00:04:52.621 Test: mem map adjacent registrations ...passed 00:04:52.621 00:04:52.621 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.621 suites 1 1 n/a 0 0 00:04:52.621 tests 4 4 4 0 0 00:04:52.621 asserts 152 152 152 0 n/a 00:04:52.621 00:04:52.621 Elapsed time = 0.347 seconds 00:04:52.621 00:04:52.621 real 0m0.392s 00:04:52.621 user 0m0.351s 00:04:52.621 sys 0m0.034s 00:04:52.621 21:04:03 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.621 21:04:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:52.621 ************************************ 00:04:52.621 END TEST env_memory 00:04:52.621 ************************************ 00:04:52.621 21:04:03 env -- common/autotest_common.sh@1142 -- # return 0 00:04:52.621 21:04:03 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:52.621 21:04:03 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.621 21:04:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.621 21:04:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.621 ************************************ 00:04:52.621 START TEST env_vtophys 
00:04:52.621 ************************************ 00:04:52.621 21:04:03 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:52.621 EAL: lib.eal log level changed from notice to debug 00:04:52.621 EAL: Detected lcore 0 as core 0 on socket 0 00:04:52.621 EAL: Detected lcore 1 as core 0 on socket 0 00:04:52.621 EAL: Detected lcore 2 as core 0 on socket 0 00:04:52.621 EAL: Detected lcore 3 as core 0 on socket 0 00:04:52.621 EAL: Detected lcore 4 as core 0 on socket 0 00:04:52.621 EAL: Detected lcore 5 as core 0 on socket 0 00:04:52.621 EAL: Detected lcore 6 as core 0 on socket 0 00:04:52.621 EAL: Detected lcore 7 as core 0 on socket 0 00:04:52.621 EAL: Detected lcore 8 as core 0 on socket 0 00:04:52.621 EAL: Detected lcore 9 as core 0 on socket 0 00:04:52.621 EAL: Maximum logical cores by configuration: 128 00:04:52.621 EAL: Detected CPU lcores: 10 00:04:52.621 EAL: Detected NUMA nodes: 1 00:04:52.621 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:52.621 EAL: Detected shared linkage of DPDK 00:04:52.621 EAL: No shared files mode enabled, IPC will be disabled 00:04:52.621 EAL: Selected IOVA mode 'PA' 00:04:52.621 EAL: Probing VFIO support... 00:04:52.621 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:52.621 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:52.621 EAL: Ask a virtual area of 0x2e000 bytes 00:04:52.621 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:52.621 EAL: Setting up physically contiguous memory... 00:04:52.621 EAL: Setting maximum number of open files to 524288 00:04:52.621 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:52.621 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:52.621 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.621 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:52.621 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.621 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.621 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:52.621 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:52.621 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.621 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:52.621 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.621 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.621 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:52.621 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:52.621 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.621 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:52.621 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.621 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.621 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:52.621 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:52.621 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.621 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:52.621 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.621 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.621 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:52.621 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:52.621 EAL: Hugepages will be freed exactly as allocated. 
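The 2 MB pages the EAL maps in this test were reserved earlier by scripts/setup.sh; the Hugepages table further up shows 2048 pages of 2048 kB on node0. A rough manual equivalent, shown only as a sketch (sudo and the /proc/sys path are assumptions, not part of the trace):

  # Reserve 2048 x 2 MB hugepages (4 GB total) and confirm the kernel accepted them.
  echo 2048 | sudo tee /proc/sys/vm/nr_hugepages
  grep -i hugepages_ /proc/meminfo

The "Selected IOVA mode 'PA'" line is consistent with VFIO being unavailable in this VM, as the "Module /sys/module/vfio not found" messages indicate.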
00:04:52.621 EAL: No shared files mode enabled, IPC is disabled 00:04:52.621 EAL: No shared files mode enabled, IPC is disabled 00:04:52.880 EAL: TSC frequency is ~2200000 KHz 00:04:52.880 EAL: Main lcore 0 is ready (tid=7f249b41ba40;cpuset=[0]) 00:04:52.880 EAL: Trying to obtain current memory policy. 00:04:52.880 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.880 EAL: Restoring previous memory policy: 0 00:04:52.880 EAL: request: mp_malloc_sync 00:04:52.880 EAL: No shared files mode enabled, IPC is disabled 00:04:52.880 EAL: Heap on socket 0 was expanded by 2MB 00:04:52.880 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:52.880 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:52.880 EAL: Mem event callback 'spdk:(nil)' registered 00:04:52.880 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:52.880 00:04:52.880 00:04:52.880 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.880 http://cunit.sourceforge.net/ 00:04:52.880 00:04:52.880 00:04:52.880 Suite: components_suite 00:04:53.174 Test: vtophys_malloc_test ...passed 00:04:53.174 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:53.174 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.174 EAL: Restoring previous memory policy: 4 00:04:53.174 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.174 EAL: request: mp_malloc_sync 00:04:53.174 EAL: No shared files mode enabled, IPC is disabled 00:04:53.174 EAL: Heap on socket 0 was expanded by 4MB 00:04:53.174 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.174 EAL: request: mp_malloc_sync 00:04:53.174 EAL: No shared files mode enabled, IPC is disabled 00:04:53.174 EAL: Heap on socket 0 was shrunk by 4MB 00:04:53.174 EAL: Trying to obtain current memory policy. 00:04:53.174 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.174 EAL: Restoring previous memory policy: 4 00:04:53.174 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.174 EAL: request: mp_malloc_sync 00:04:53.174 EAL: No shared files mode enabled, IPC is disabled 00:04:53.174 EAL: Heap on socket 0 was expanded by 6MB 00:04:53.174 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.174 EAL: request: mp_malloc_sync 00:04:53.174 EAL: No shared files mode enabled, IPC is disabled 00:04:53.174 EAL: Heap on socket 0 was shrunk by 6MB 00:04:53.174 EAL: Trying to obtain current memory policy. 00:04:53.174 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.174 EAL: Restoring previous memory policy: 4 00:04:53.174 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.174 EAL: request: mp_malloc_sync 00:04:53.174 EAL: No shared files mode enabled, IPC is disabled 00:04:53.174 EAL: Heap on socket 0 was expanded by 10MB 00:04:53.174 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.174 EAL: request: mp_malloc_sync 00:04:53.174 EAL: No shared files mode enabled, IPC is disabled 00:04:53.174 EAL: Heap on socket 0 was shrunk by 10MB 00:04:53.174 EAL: Trying to obtain current memory policy. 
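The "Heap on socket 0 was expanded by N" / "shrunk by N" pairs in this suite correspond to the test allocating and then freeing progressively larger buffers, with DPDK growing and trimming the hugepage-backed heap on demand. One way to observe that from outside the test, sketched here on the assumption that a second shell is available:

  # Shell 1: rerun the unit test exactly as the harness invoked it above.
  /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
  # Shell 2: watch 2 MB pages being taken from and returned to the free pool.
  watch -n 0.5 grep HugePages_Free /proc/meminfo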
00:04:53.174 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.174 EAL: Restoring previous memory policy: 4 00:04:53.174 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.174 EAL: request: mp_malloc_sync 00:04:53.174 EAL: No shared files mode enabled, IPC is disabled 00:04:53.174 EAL: Heap on socket 0 was expanded by 18MB 00:04:53.174 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.174 EAL: request: mp_malloc_sync 00:04:53.174 EAL: No shared files mode enabled, IPC is disabled 00:04:53.174 EAL: Heap on socket 0 was shrunk by 18MB 00:04:53.174 EAL: Trying to obtain current memory policy. 00:04:53.174 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.174 EAL: Restoring previous memory policy: 4 00:04:53.174 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.174 EAL: request: mp_malloc_sync 00:04:53.174 EAL: No shared files mode enabled, IPC is disabled 00:04:53.174 EAL: Heap on socket 0 was expanded by 34MB 00:04:53.450 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.450 EAL: request: mp_malloc_sync 00:04:53.450 EAL: No shared files mode enabled, IPC is disabled 00:04:53.450 EAL: Heap on socket 0 was shrunk by 34MB 00:04:53.450 EAL: Trying to obtain current memory policy. 00:04:53.450 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.450 EAL: Restoring previous memory policy: 4 00:04:53.450 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.450 EAL: request: mp_malloc_sync 00:04:53.450 EAL: No shared files mode enabled, IPC is disabled 00:04:53.450 EAL: Heap on socket 0 was expanded by 66MB 00:04:53.450 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.450 EAL: request: mp_malloc_sync 00:04:53.450 EAL: No shared files mode enabled, IPC is disabled 00:04:53.450 EAL: Heap on socket 0 was shrunk by 66MB 00:04:53.450 EAL: Trying to obtain current memory policy. 00:04:53.450 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.709 EAL: Restoring previous memory policy: 4 00:04:53.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.709 EAL: request: mp_malloc_sync 00:04:53.709 EAL: No shared files mode enabled, IPC is disabled 00:04:53.709 EAL: Heap on socket 0 was expanded by 130MB 00:04:53.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.709 EAL: request: mp_malloc_sync 00:04:53.709 EAL: No shared files mode enabled, IPC is disabled 00:04:53.709 EAL: Heap on socket 0 was shrunk by 130MB 00:04:53.967 EAL: Trying to obtain current memory policy. 00:04:53.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.967 EAL: Restoring previous memory policy: 4 00:04:53.967 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.967 EAL: request: mp_malloc_sync 00:04:53.967 EAL: No shared files mode enabled, IPC is disabled 00:04:53.967 EAL: Heap on socket 0 was expanded by 258MB 00:04:54.534 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.534 EAL: request: mp_malloc_sync 00:04:54.534 EAL: No shared files mode enabled, IPC is disabled 00:04:54.534 EAL: Heap on socket 0 was shrunk by 258MB 00:04:54.792 EAL: Trying to obtain current memory policy. 
00:04:54.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.792 EAL: Restoring previous memory policy: 4 00:04:54.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.792 EAL: request: mp_malloc_sync 00:04:54.792 EAL: No shared files mode enabled, IPC is disabled 00:04:54.792 EAL: Heap on socket 0 was expanded by 514MB 00:04:55.730 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.730 EAL: request: mp_malloc_sync 00:04:55.730 EAL: No shared files mode enabled, IPC is disabled 00:04:55.730 EAL: Heap on socket 0 was shrunk by 514MB 00:04:56.311 EAL: Trying to obtain current memory policy. 00:04:56.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.311 EAL: Restoring previous memory policy: 4 00:04:56.311 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.311 EAL: request: mp_malloc_sync 00:04:56.311 EAL: No shared files mode enabled, IPC is disabled 00:04:56.311 EAL: Heap on socket 0 was expanded by 1026MB 00:04:58.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.221 EAL: request: mp_malloc_sync 00:04:58.221 EAL: No shared files mode enabled, IPC is disabled 00:04:58.221 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:59.157 passed 00:04:59.157 00:04:59.157 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.157 suites 1 1 n/a 0 0 00:04:59.157 tests 2 2 2 0 0 00:04:59.157 asserts 5334 5334 5334 0 n/a 00:04:59.157 00:04:59.157 Elapsed time = 6.300 seconds 00:04:59.157 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.157 EAL: request: mp_malloc_sync 00:04:59.157 EAL: No shared files mode enabled, IPC is disabled 00:04:59.157 EAL: Heap on socket 0 was shrunk by 2MB 00:04:59.157 EAL: No shared files mode enabled, IPC is disabled 00:04:59.157 EAL: No shared files mode enabled, IPC is disabled 00:04:59.157 EAL: No shared files mode enabled, IPC is disabled 00:04:59.157 00:04:59.157 real 0m6.616s 00:04:59.157 user 0m5.738s 00:04:59.157 sys 0m0.706s 00:04:59.157 ************************************ 00:04:59.157 END TEST env_vtophys 00:04:59.157 ************************************ 00:04:59.157 21:04:10 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.157 21:04:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:59.157 21:04:10 env -- common/autotest_common.sh@1142 -- # return 0 00:04:59.157 21:04:10 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:59.157 21:04:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.157 21:04:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.157 21:04:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.157 ************************************ 00:04:59.157 START TEST env_pci 00:04:59.157 ************************************ 00:04:59.157 21:04:10 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:59.157 00:04:59.157 00:04:59.157 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.157 http://cunit.sourceforge.net/ 00:04:59.157 00:04:59.157 00:04:59.157 Suite: pci 00:04:59.157 Test: pci_hook ...[2024-07-14 21:04:10.695825] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 59376 has claimed it 00:04:59.416 EAL: Cannot find device (10000:00:01.0) 00:04:59.416 EAL: Failed to attach device on primary process 00:04:59.416 passed 00:04:59.416 00:04:59.416 Run Summary: Type Total Ran Passed Failed 
Inactive 00:04:59.416 suites 1 1 n/a 0 0 00:04:59.416 tests 1 1 1 0 0 00:04:59.416 asserts 25 25 25 0 n/a 00:04:59.416 00:04:59.416 Elapsed time = 0.006 seconds 00:04:59.416 00:04:59.416 real 0m0.076s 00:04:59.416 user 0m0.040s 00:04:59.416 sys 0m0.035s 00:04:59.416 21:04:10 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.416 21:04:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:59.416 ************************************ 00:04:59.416 END TEST env_pci 00:04:59.416 ************************************ 00:04:59.416 21:04:10 env -- common/autotest_common.sh@1142 -- # return 0 00:04:59.416 21:04:10 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:59.416 21:04:10 env -- env/env.sh@15 -- # uname 00:04:59.416 21:04:10 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:59.416 21:04:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:59.416 21:04:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:59.416 21:04:10 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:59.416 21:04:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.416 21:04:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.416 ************************************ 00:04:59.416 START TEST env_dpdk_post_init 00:04:59.416 ************************************ 00:04:59.416 21:04:10 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:59.416 EAL: Detected CPU lcores: 10 00:04:59.416 EAL: Detected NUMA nodes: 1 00:04:59.416 EAL: Detected shared linkage of DPDK 00:04:59.416 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:59.416 EAL: Selected IOVA mode 'PA' 00:04:59.675 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:59.675 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:59.675 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:59.675 Starting DPDK initialization... 00:04:59.675 Starting SPDK post initialization... 00:04:59.675 SPDK NVMe probe 00:04:59.675 Attaching to 0000:00:10.0 00:04:59.675 Attaching to 0000:00:11.0 00:04:59.675 Attached to 0000:00:10.0 00:04:59.675 Attached to 0000:00:11.0 00:04:59.675 Cleaning up... 
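The probe above attaches the two QEMU-emulated NVMe controllers (vendor:device 1b36:0010 at 0000:00:10.0 and 0000:00:11.0) exposed by the virt test bed. The harness passes an explicit core mask and base virtual address; repeating the run by hand uses the same command line shown in the trace (sudo is an assumption, for hugepage and PCI access):

    sudo /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000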
00:04:59.675 00:04:59.675 real 0m0.257s 00:04:59.675 user 0m0.078s 00:04:59.675 sys 0m0.082s 00:04:59.675 21:04:11 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.675 21:04:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.675 ************************************ 00:04:59.675 END TEST env_dpdk_post_init 00:04:59.675 ************************************ 00:04:59.675 21:04:11 env -- common/autotest_common.sh@1142 -- # return 0 00:04:59.675 21:04:11 env -- env/env.sh@26 -- # uname 00:04:59.675 21:04:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:59.675 21:04:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:59.675 21:04:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.675 21:04:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.675 21:04:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.675 ************************************ 00:04:59.675 START TEST env_mem_callbacks 00:04:59.675 ************************************ 00:04:59.675 21:04:11 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:59.675 EAL: Detected CPU lcores: 10 00:04:59.675 EAL: Detected NUMA nodes: 1 00:04:59.675 EAL: Detected shared linkage of DPDK 00:04:59.675 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:59.675 EAL: Selected IOVA mode 'PA' 00:04:59.934 00:04:59.934 00:04:59.934 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.934 http://cunit.sourceforge.net/ 00:04:59.934 00:04:59.934 00:04:59.934 Suite: memory 00:04:59.934 Test: test ... 00:04:59.934 register 0x200000200000 2097152 00:04:59.934 malloc 3145728 00:04:59.934 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:59.934 register 0x200000400000 4194304 00:04:59.934 buf 0x2000004fffc0 len 3145728 PASSED 00:04:59.934 malloc 64 00:04:59.934 buf 0x2000004ffec0 len 64 PASSED 00:04:59.934 malloc 4194304 00:04:59.934 register 0x200000800000 6291456 00:04:59.934 buf 0x2000009fffc0 len 4194304 PASSED 00:04:59.934 free 0x2000004fffc0 3145728 00:04:59.934 free 0x2000004ffec0 64 00:04:59.934 unregister 0x200000400000 4194304 PASSED 00:04:59.934 free 0x2000009fffc0 4194304 00:04:59.934 unregister 0x200000800000 6291456 PASSED 00:04:59.934 malloc 8388608 00:04:59.934 register 0x200000400000 10485760 00:04:59.934 buf 0x2000005fffc0 len 8388608 PASSED 00:04:59.934 free 0x2000005fffc0 8388608 00:04:59.934 unregister 0x200000400000 10485760 PASSED 00:04:59.934 passed 00:04:59.934 00:04:59.934 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.934 suites 1 1 n/a 0 0 00:04:59.934 tests 1 1 1 0 0 00:04:59.934 asserts 15 15 15 0 n/a 00:04:59.934 00:04:59.934 Elapsed time = 0.053 seconds 00:04:59.934 00:04:59.934 real 0m0.253s 00:04:59.934 user 0m0.085s 00:04:59.934 sys 0m0.064s 00:04:59.934 21:04:11 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.934 21:04:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:59.934 ************************************ 00:04:59.934 END TEST env_mem_callbacks 00:04:59.934 ************************************ 00:04:59.934 21:04:11 env -- common/autotest_common.sh@1142 -- # return 0 00:04:59.934 00:04:59.934 real 0m7.949s 00:04:59.934 user 0m6.410s 00:04:59.934 sys 0m1.135s 00:04:59.934 ************************************ 00:04:59.934 END TEST env 00:04:59.934 
************************************ 00:04:59.934 21:04:11 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.934 21:04:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.934 21:04:11 -- common/autotest_common.sh@1142 -- # return 0 00:04:59.934 21:04:11 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:59.934 21:04:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.934 21:04:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.934 21:04:11 -- common/autotest_common.sh@10 -- # set +x 00:04:59.934 ************************************ 00:04:59.934 START TEST rpc 00:04:59.934 ************************************ 00:04:59.934 21:04:11 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:00.194 * Looking for test storage... 00:05:00.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:00.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.194 21:04:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59495 00:05:00.194 21:04:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.194 21:04:11 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:00.194 21:04:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59495 00:05:00.194 21:04:11 rpc -- common/autotest_common.sh@829 -- # '[' -z 59495 ']' 00:05:00.194 21:04:11 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.194 21:04:11 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.194 21:04:11 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.194 21:04:11 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.194 21:04:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.194 [2024-07-14 21:04:11.683327] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:00.194 [2024-07-14 21:04:11.683726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59495 ] 00:05:00.452 [2024-07-14 21:04:11.854797] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.710 [2024-07-14 21:04:12.018614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:00.710 [2024-07-14 21:04:12.018899] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59495' to capture a snapshot of events at runtime. 00:05:00.710 [2024-07-14 21:04:12.019054] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:00.710 [2024-07-14 21:04:12.019189] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:00.710 [2024-07-14 21:04:12.019251] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59495 for offline analysis/debug. 
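Because rpc.sh starts the target with the bdev tracepoint group enabled (spdk_tgt -e bdev), the startup notices above explain how to inspect the trace: attach to the live process, or copy /dev/shm/spdk_tgt_trace.pid59495 for offline analysis. A sketch of the first option, with the spdk_trace binary assumed to live in the same build/bin directory as spdk_tgt:

    # Snapshot the trace of the target started for this run (pid 59495 is specific to this run).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 59495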
00:05:00.710 [2024-07-14 21:04:12.019409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.710 [2024-07-14 21:04:12.166408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:01.278 21:04:12 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.278 21:04:12 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:01.278 21:04:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.278 21:04:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.278 21:04:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:01.278 21:04:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:01.278 21:04:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.278 21:04:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.278 21:04:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.278 ************************************ 00:05:01.278 START TEST rpc_integrity 00:05:01.278 ************************************ 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:01.278 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.278 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:01.278 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:01.278 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:01.278 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.278 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:01.278 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.278 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:01.278 { 00:05:01.278 "name": "Malloc0", 00:05:01.278 "aliases": [ 00:05:01.278 "e9333efe-b9f8-4819-bd27-69508b1ddc75" 00:05:01.278 ], 00:05:01.278 "product_name": "Malloc disk", 00:05:01.278 "block_size": 512, 00:05:01.278 "num_blocks": 16384, 00:05:01.278 "uuid": "e9333efe-b9f8-4819-bd27-69508b1ddc75", 00:05:01.278 "assigned_rate_limits": { 00:05:01.278 "rw_ios_per_sec": 0, 00:05:01.278 "rw_mbytes_per_sec": 0, 00:05:01.278 "r_mbytes_per_sec": 0, 00:05:01.278 "w_mbytes_per_sec": 0 00:05:01.278 }, 00:05:01.278 "claimed": false, 00:05:01.278 "zoned": false, 00:05:01.278 
"supported_io_types": { 00:05:01.278 "read": true, 00:05:01.278 "write": true, 00:05:01.278 "unmap": true, 00:05:01.278 "flush": true, 00:05:01.278 "reset": true, 00:05:01.278 "nvme_admin": false, 00:05:01.278 "nvme_io": false, 00:05:01.278 "nvme_io_md": false, 00:05:01.278 "write_zeroes": true, 00:05:01.278 "zcopy": true, 00:05:01.278 "get_zone_info": false, 00:05:01.278 "zone_management": false, 00:05:01.278 "zone_append": false, 00:05:01.278 "compare": false, 00:05:01.278 "compare_and_write": false, 00:05:01.278 "abort": true, 00:05:01.278 "seek_hole": false, 00:05:01.278 "seek_data": false, 00:05:01.278 "copy": true, 00:05:01.278 "nvme_iov_md": false 00:05:01.278 }, 00:05:01.278 "memory_domains": [ 00:05:01.278 { 00:05:01.278 "dma_device_id": "system", 00:05:01.278 "dma_device_type": 1 00:05:01.278 }, 00:05:01.278 { 00:05:01.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.278 "dma_device_type": 2 00:05:01.278 } 00:05:01.278 ], 00:05:01.278 "driver_specific": {} 00:05:01.278 } 00:05:01.278 ]' 00:05:01.278 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:01.278 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:01.278 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.278 [2024-07-14 21:04:12.764295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:01.278 [2024-07-14 21:04:12.764360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:01.278 [2024-07-14 21:04:12.764442] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:05:01.278 [2024-07-14 21:04:12.764463] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:01.278 [2024-07-14 21:04:12.766935] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:01.278 [2024-07-14 21:04:12.766989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:01.278 Passthru0 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.278 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.278 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.278 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:01.278 { 00:05:01.278 "name": "Malloc0", 00:05:01.278 "aliases": [ 00:05:01.278 "e9333efe-b9f8-4819-bd27-69508b1ddc75" 00:05:01.278 ], 00:05:01.278 "product_name": "Malloc disk", 00:05:01.278 "block_size": 512, 00:05:01.278 "num_blocks": 16384, 00:05:01.278 "uuid": "e9333efe-b9f8-4819-bd27-69508b1ddc75", 00:05:01.278 "assigned_rate_limits": { 00:05:01.278 "rw_ios_per_sec": 0, 00:05:01.278 "rw_mbytes_per_sec": 0, 00:05:01.278 "r_mbytes_per_sec": 0, 00:05:01.278 "w_mbytes_per_sec": 0 00:05:01.278 }, 00:05:01.278 "claimed": true, 00:05:01.278 "claim_type": "exclusive_write", 00:05:01.278 "zoned": false, 00:05:01.278 "supported_io_types": { 00:05:01.278 "read": true, 00:05:01.278 "write": true, 00:05:01.278 "unmap": true, 00:05:01.278 "flush": true, 00:05:01.278 "reset": true, 00:05:01.278 "nvme_admin": false, 
00:05:01.278 "nvme_io": false, 00:05:01.278 "nvme_io_md": false, 00:05:01.278 "write_zeroes": true, 00:05:01.278 "zcopy": true, 00:05:01.278 "get_zone_info": false, 00:05:01.278 "zone_management": false, 00:05:01.278 "zone_append": false, 00:05:01.278 "compare": false, 00:05:01.278 "compare_and_write": false, 00:05:01.278 "abort": true, 00:05:01.278 "seek_hole": false, 00:05:01.278 "seek_data": false, 00:05:01.278 "copy": true, 00:05:01.278 "nvme_iov_md": false 00:05:01.278 }, 00:05:01.278 "memory_domains": [ 00:05:01.278 { 00:05:01.278 "dma_device_id": "system", 00:05:01.278 "dma_device_type": 1 00:05:01.278 }, 00:05:01.278 { 00:05:01.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.279 "dma_device_type": 2 00:05:01.279 } 00:05:01.279 ], 00:05:01.279 "driver_specific": {} 00:05:01.279 }, 00:05:01.279 { 00:05:01.279 "name": "Passthru0", 00:05:01.279 "aliases": [ 00:05:01.279 "c04b45ea-ddb4-5cac-98e9-230d397ffdef" 00:05:01.279 ], 00:05:01.279 "product_name": "passthru", 00:05:01.279 "block_size": 512, 00:05:01.279 "num_blocks": 16384, 00:05:01.279 "uuid": "c04b45ea-ddb4-5cac-98e9-230d397ffdef", 00:05:01.279 "assigned_rate_limits": { 00:05:01.279 "rw_ios_per_sec": 0, 00:05:01.279 "rw_mbytes_per_sec": 0, 00:05:01.279 "r_mbytes_per_sec": 0, 00:05:01.279 "w_mbytes_per_sec": 0 00:05:01.279 }, 00:05:01.279 "claimed": false, 00:05:01.279 "zoned": false, 00:05:01.279 "supported_io_types": { 00:05:01.279 "read": true, 00:05:01.279 "write": true, 00:05:01.279 "unmap": true, 00:05:01.279 "flush": true, 00:05:01.279 "reset": true, 00:05:01.279 "nvme_admin": false, 00:05:01.279 "nvme_io": false, 00:05:01.279 "nvme_io_md": false, 00:05:01.279 "write_zeroes": true, 00:05:01.279 "zcopy": true, 00:05:01.279 "get_zone_info": false, 00:05:01.279 "zone_management": false, 00:05:01.279 "zone_append": false, 00:05:01.279 "compare": false, 00:05:01.279 "compare_and_write": false, 00:05:01.279 "abort": true, 00:05:01.279 "seek_hole": false, 00:05:01.279 "seek_data": false, 00:05:01.279 "copy": true, 00:05:01.279 "nvme_iov_md": false 00:05:01.279 }, 00:05:01.279 "memory_domains": [ 00:05:01.279 { 00:05:01.279 "dma_device_id": "system", 00:05:01.279 "dma_device_type": 1 00:05:01.279 }, 00:05:01.279 { 00:05:01.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.279 "dma_device_type": 2 00:05:01.279 } 00:05:01.279 ], 00:05:01.279 "driver_specific": { 00:05:01.279 "passthru": { 00:05:01.279 "name": "Passthru0", 00:05:01.279 "base_bdev_name": "Malloc0" 00:05:01.279 } 00:05:01.279 } 00:05:01.279 } 00:05:01.279 ]' 00:05:01.279 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:01.538 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:01.538 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:01.538 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.538 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.538 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.538 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:01.538 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.538 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.538 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.538 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:01.538 21:04:12 rpc.rpc_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.538 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.538 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.538 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:01.538 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:01.538 ************************************ 00:05:01.538 END TEST rpc_integrity 00:05:01.538 ************************************ 00:05:01.538 21:04:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.538 00:05:01.538 real 0m0.348s 00:05:01.538 user 0m0.215s 00:05:01.538 sys 0m0.042s 00:05:01.538 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.538 21:04:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.538 21:04:12 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:01.538 21:04:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:01.538 21:04:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.538 21:04:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.538 21:04:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.538 ************************************ 00:05:01.538 START TEST rpc_plugins 00:05:01.538 ************************************ 00:05:01.538 21:04:13 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:01.538 21:04:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:01.538 21:04:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.538 21:04:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.538 21:04:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.538 21:04:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:01.538 21:04:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:01.538 21:04:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.538 21:04:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.538 21:04:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.538 21:04:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:01.538 { 00:05:01.538 "name": "Malloc1", 00:05:01.538 "aliases": [ 00:05:01.538 "7056c8b0-0831-4a7b-b043-7e84d90f0ea8" 00:05:01.538 ], 00:05:01.538 "product_name": "Malloc disk", 00:05:01.538 "block_size": 4096, 00:05:01.538 "num_blocks": 256, 00:05:01.538 "uuid": "7056c8b0-0831-4a7b-b043-7e84d90f0ea8", 00:05:01.538 "assigned_rate_limits": { 00:05:01.538 "rw_ios_per_sec": 0, 00:05:01.538 "rw_mbytes_per_sec": 0, 00:05:01.538 "r_mbytes_per_sec": 0, 00:05:01.538 "w_mbytes_per_sec": 0 00:05:01.538 }, 00:05:01.538 "claimed": false, 00:05:01.538 "zoned": false, 00:05:01.538 "supported_io_types": { 00:05:01.538 "read": true, 00:05:01.538 "write": true, 00:05:01.538 "unmap": true, 00:05:01.538 "flush": true, 00:05:01.538 "reset": true, 00:05:01.538 "nvme_admin": false, 00:05:01.538 "nvme_io": false, 00:05:01.538 "nvme_io_md": false, 00:05:01.538 "write_zeroes": true, 00:05:01.538 "zcopy": true, 00:05:01.538 "get_zone_info": false, 00:05:01.538 "zone_management": false, 00:05:01.538 "zone_append": false, 00:05:01.538 "compare": false, 00:05:01.538 "compare_and_write": false, 00:05:01.538 "abort": true, 00:05:01.538 "seek_hole": false, 00:05:01.538 "seek_data": false, 00:05:01.538 "copy": true, 00:05:01.538 
"nvme_iov_md": false 00:05:01.538 }, 00:05:01.538 "memory_domains": [ 00:05:01.538 { 00:05:01.538 "dma_device_id": "system", 00:05:01.538 "dma_device_type": 1 00:05:01.538 }, 00:05:01.538 { 00:05:01.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.538 "dma_device_type": 2 00:05:01.538 } 00:05:01.538 ], 00:05:01.538 "driver_specific": {} 00:05:01.538 } 00:05:01.538 ]' 00:05:01.538 21:04:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:01.797 21:04:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:01.798 21:04:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:01.798 21:04:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.798 21:04:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.798 21:04:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.798 21:04:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:01.798 21:04:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.798 21:04:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.798 21:04:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.798 21:04:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:01.798 21:04:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:01.798 ************************************ 00:05:01.798 END TEST rpc_plugins 00:05:01.798 ************************************ 00:05:01.798 21:04:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:01.798 00:05:01.798 real 0m0.172s 00:05:01.798 user 0m0.110s 00:05:01.798 sys 0m0.023s 00:05:01.798 21:04:13 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.798 21:04:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.798 21:04:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:01.798 21:04:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:01.798 21:04:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.798 21:04:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.798 21:04:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.798 ************************************ 00:05:01.798 START TEST rpc_trace_cmd_test 00:05:01.798 ************************************ 00:05:01.798 21:04:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:01.798 21:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:01.798 21:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:01.798 21:04:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.798 21:04:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:01.798 21:04:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.798 21:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:01.798 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59495", 00:05:01.798 "tpoint_group_mask": "0x8", 00:05:01.798 "iscsi_conn": { 00:05:01.798 "mask": "0x2", 00:05:01.798 "tpoint_mask": "0x0" 00:05:01.798 }, 00:05:01.798 "scsi": { 00:05:01.798 "mask": "0x4", 00:05:01.798 "tpoint_mask": "0x0" 00:05:01.798 }, 00:05:01.798 "bdev": { 00:05:01.798 "mask": "0x8", 00:05:01.798 "tpoint_mask": "0xffffffffffffffff" 00:05:01.798 }, 00:05:01.798 "nvmf_rdma": { 00:05:01.798 "mask": "0x10", 00:05:01.798 "tpoint_mask": "0x0" 
00:05:01.798 }, 00:05:01.798 "nvmf_tcp": { 00:05:01.798 "mask": "0x20", 00:05:01.798 "tpoint_mask": "0x0" 00:05:01.798 }, 00:05:01.798 "ftl": { 00:05:01.798 "mask": "0x40", 00:05:01.798 "tpoint_mask": "0x0" 00:05:01.798 }, 00:05:01.798 "blobfs": { 00:05:01.798 "mask": "0x80", 00:05:01.798 "tpoint_mask": "0x0" 00:05:01.798 }, 00:05:01.798 "dsa": { 00:05:01.798 "mask": "0x200", 00:05:01.798 "tpoint_mask": "0x0" 00:05:01.798 }, 00:05:01.798 "thread": { 00:05:01.798 "mask": "0x400", 00:05:01.798 "tpoint_mask": "0x0" 00:05:01.798 }, 00:05:01.798 "nvme_pcie": { 00:05:01.798 "mask": "0x800", 00:05:01.798 "tpoint_mask": "0x0" 00:05:01.798 }, 00:05:01.798 "iaa": { 00:05:01.798 "mask": "0x1000", 00:05:01.798 "tpoint_mask": "0x0" 00:05:01.798 }, 00:05:01.798 "nvme_tcp": { 00:05:01.798 "mask": "0x2000", 00:05:01.798 "tpoint_mask": "0x0" 00:05:01.798 }, 00:05:01.798 "bdev_nvme": { 00:05:01.798 "mask": "0x4000", 00:05:01.798 "tpoint_mask": "0x0" 00:05:01.798 }, 00:05:01.798 "sock": { 00:05:01.798 "mask": "0x8000", 00:05:01.798 "tpoint_mask": "0x0" 00:05:01.798 } 00:05:01.798 }' 00:05:01.798 21:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:01.798 21:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:01.798 21:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:01.798 21:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:01.798 21:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:02.056 21:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:02.056 21:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:02.056 21:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:02.056 21:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:02.056 ************************************ 00:05:02.056 END TEST rpc_trace_cmd_test 00:05:02.056 ************************************ 00:05:02.056 21:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:02.056 00:05:02.056 real 0m0.275s 00:05:02.056 user 0m0.240s 00:05:02.056 sys 0m0.024s 00:05:02.056 21:04:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.056 21:04:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:02.056 21:04:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:02.056 21:04:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:02.056 21:04:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:02.056 21:04:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:02.056 21:04:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.056 21:04:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.056 21:04:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.056 ************************************ 00:05:02.056 START TEST rpc_daemon_integrity 00:05:02.056 ************************************ 00:05:02.056 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:02.056 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:02.056 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.056 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.056 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.056 
21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:02.056 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:02.314 { 00:05:02.314 "name": "Malloc2", 00:05:02.314 "aliases": [ 00:05:02.314 "75eed131-1fd1-4c8c-908b-3f58a68a17e2" 00:05:02.314 ], 00:05:02.314 "product_name": "Malloc disk", 00:05:02.314 "block_size": 512, 00:05:02.314 "num_blocks": 16384, 00:05:02.314 "uuid": "75eed131-1fd1-4c8c-908b-3f58a68a17e2", 00:05:02.314 "assigned_rate_limits": { 00:05:02.314 "rw_ios_per_sec": 0, 00:05:02.314 "rw_mbytes_per_sec": 0, 00:05:02.314 "r_mbytes_per_sec": 0, 00:05:02.314 "w_mbytes_per_sec": 0 00:05:02.314 }, 00:05:02.314 "claimed": false, 00:05:02.314 "zoned": false, 00:05:02.314 "supported_io_types": { 00:05:02.314 "read": true, 00:05:02.314 "write": true, 00:05:02.314 "unmap": true, 00:05:02.314 "flush": true, 00:05:02.314 "reset": true, 00:05:02.314 "nvme_admin": false, 00:05:02.314 "nvme_io": false, 00:05:02.314 "nvme_io_md": false, 00:05:02.314 "write_zeroes": true, 00:05:02.314 "zcopy": true, 00:05:02.314 "get_zone_info": false, 00:05:02.314 "zone_management": false, 00:05:02.314 "zone_append": false, 00:05:02.314 "compare": false, 00:05:02.314 "compare_and_write": false, 00:05:02.314 "abort": true, 00:05:02.314 "seek_hole": false, 00:05:02.314 "seek_data": false, 00:05:02.314 "copy": true, 00:05:02.314 "nvme_iov_md": false 00:05:02.314 }, 00:05:02.314 "memory_domains": [ 00:05:02.314 { 00:05:02.314 "dma_device_id": "system", 00:05:02.314 "dma_device_type": 1 00:05:02.314 }, 00:05:02.314 { 00:05:02.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.314 "dma_device_type": 2 00:05:02.314 } 00:05:02.314 ], 00:05:02.314 "driver_specific": {} 00:05:02.314 } 00:05:02.314 ]' 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.314 [2024-07-14 21:04:13.707023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:02.314 [2024-07-14 21:04:13.707085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:02.314 [2024-07-14 21:04:13.707126] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:05:02.314 [2024-07-14 21:04:13.707172] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:02.314 [2024-07-14 21:04:13.709487] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:02.314 [2024-07-14 21:04:13.709546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:02.314 Passthru0 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:02.314 { 00:05:02.314 "name": "Malloc2", 00:05:02.314 "aliases": [ 00:05:02.314 "75eed131-1fd1-4c8c-908b-3f58a68a17e2" 00:05:02.314 ], 00:05:02.314 "product_name": "Malloc disk", 00:05:02.314 "block_size": 512, 00:05:02.314 "num_blocks": 16384, 00:05:02.314 "uuid": "75eed131-1fd1-4c8c-908b-3f58a68a17e2", 00:05:02.314 "assigned_rate_limits": { 00:05:02.314 "rw_ios_per_sec": 0, 00:05:02.314 "rw_mbytes_per_sec": 0, 00:05:02.314 "r_mbytes_per_sec": 0, 00:05:02.314 "w_mbytes_per_sec": 0 00:05:02.314 }, 00:05:02.314 "claimed": true, 00:05:02.314 "claim_type": "exclusive_write", 00:05:02.314 "zoned": false, 00:05:02.314 "supported_io_types": { 00:05:02.314 "read": true, 00:05:02.314 "write": true, 00:05:02.314 "unmap": true, 00:05:02.314 "flush": true, 00:05:02.314 "reset": true, 00:05:02.314 "nvme_admin": false, 00:05:02.314 "nvme_io": false, 00:05:02.314 "nvme_io_md": false, 00:05:02.314 "write_zeroes": true, 00:05:02.314 "zcopy": true, 00:05:02.314 "get_zone_info": false, 00:05:02.314 "zone_management": false, 00:05:02.314 "zone_append": false, 00:05:02.314 "compare": false, 00:05:02.314 "compare_and_write": false, 00:05:02.314 "abort": true, 00:05:02.314 "seek_hole": false, 00:05:02.314 "seek_data": false, 00:05:02.314 "copy": true, 00:05:02.314 "nvme_iov_md": false 00:05:02.314 }, 00:05:02.314 "memory_domains": [ 00:05:02.314 { 00:05:02.314 "dma_device_id": "system", 00:05:02.314 "dma_device_type": 1 00:05:02.314 }, 00:05:02.314 { 00:05:02.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.314 "dma_device_type": 2 00:05:02.314 } 00:05:02.314 ], 00:05:02.314 "driver_specific": {} 00:05:02.314 }, 00:05:02.314 { 00:05:02.314 "name": "Passthru0", 00:05:02.314 "aliases": [ 00:05:02.314 "5da4b33e-663d-5eb1-bf45-bc227e9db7a3" 00:05:02.314 ], 00:05:02.314 "product_name": "passthru", 00:05:02.314 "block_size": 512, 00:05:02.314 "num_blocks": 16384, 00:05:02.314 "uuid": "5da4b33e-663d-5eb1-bf45-bc227e9db7a3", 00:05:02.314 "assigned_rate_limits": { 00:05:02.314 "rw_ios_per_sec": 0, 00:05:02.314 "rw_mbytes_per_sec": 0, 00:05:02.314 "r_mbytes_per_sec": 0, 00:05:02.314 "w_mbytes_per_sec": 0 00:05:02.314 }, 00:05:02.314 "claimed": false, 00:05:02.314 "zoned": false, 00:05:02.314 "supported_io_types": { 00:05:02.314 "read": true, 00:05:02.314 "write": true, 00:05:02.314 "unmap": true, 00:05:02.314 "flush": true, 00:05:02.314 "reset": true, 00:05:02.314 "nvme_admin": false, 00:05:02.314 "nvme_io": false, 00:05:02.314 "nvme_io_md": false, 00:05:02.314 "write_zeroes": true, 00:05:02.314 "zcopy": true, 
00:05:02.314 "get_zone_info": false, 00:05:02.314 "zone_management": false, 00:05:02.314 "zone_append": false, 00:05:02.314 "compare": false, 00:05:02.314 "compare_and_write": false, 00:05:02.314 "abort": true, 00:05:02.314 "seek_hole": false, 00:05:02.314 "seek_data": false, 00:05:02.314 "copy": true, 00:05:02.314 "nvme_iov_md": false 00:05:02.314 }, 00:05:02.314 "memory_domains": [ 00:05:02.314 { 00:05:02.314 "dma_device_id": "system", 00:05:02.314 "dma_device_type": 1 00:05:02.314 }, 00:05:02.314 { 00:05:02.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.314 "dma_device_type": 2 00:05:02.314 } 00:05:02.314 ], 00:05:02.314 "driver_specific": { 00:05:02.314 "passthru": { 00:05:02.314 "name": "Passthru0", 00:05:02.314 "base_bdev_name": "Malloc2" 00:05:02.314 } 00:05:02.314 } 00:05:02.314 } 00:05:02.314 ]' 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:02.314 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:02.572 ************************************ 00:05:02.572 END TEST rpc_daemon_integrity 00:05:02.572 ************************************ 00:05:02.572 21:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:02.572 00:05:02.572 real 0m0.345s 00:05:02.572 user 0m0.219s 00:05:02.572 sys 0m0.040s 00:05:02.572 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.572 21:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.572 21:04:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:02.572 21:04:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:02.572 21:04:13 rpc -- rpc/rpc.sh@84 -- # killprocess 59495 00:05:02.572 21:04:13 rpc -- common/autotest_common.sh@948 -- # '[' -z 59495 ']' 00:05:02.573 21:04:13 rpc -- common/autotest_common.sh@952 -- # kill -0 59495 00:05:02.573 21:04:13 rpc -- common/autotest_common.sh@953 -- # uname 00:05:02.573 21:04:13 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.573 21:04:13 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59495 00:05:02.573 killing process with pid 59495 00:05:02.573 21:04:13 rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:02.573 21:04:13 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:02.573 21:04:13 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59495' 00:05:02.573 21:04:13 rpc -- common/autotest_common.sh@967 -- # kill 59495 00:05:02.573 21:04:13 rpc -- common/autotest_common.sh@972 -- # wait 59495 00:05:04.475 00:05:04.475 real 0m4.348s 00:05:04.475 user 0m5.187s 00:05:04.475 sys 0m0.679s 00:05:04.475 ************************************ 00:05:04.475 END TEST rpc 00:05:04.475 ************************************ 00:05:04.475 21:04:15 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.475 21:04:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.475 21:04:15 -- common/autotest_common.sh@1142 -- # return 0 00:05:04.475 21:04:15 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:04.475 21:04:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.475 21:04:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.475 21:04:15 -- common/autotest_common.sh@10 -- # set +x 00:05:04.475 ************************************ 00:05:04.475 START TEST skip_rpc 00:05:04.475 ************************************ 00:05:04.475 21:04:15 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:04.475 * Looking for test storage... 00:05:04.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:04.475 21:04:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:04.475 21:04:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:04.475 21:04:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:04.475 21:04:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.475 21:04:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.475 21:04:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.475 ************************************ 00:05:04.475 START TEST skip_rpc 00:05:04.475 ************************************ 00:05:04.475 21:04:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:04.475 21:04:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59705 00:05:04.475 21:04:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.475 21:04:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:04.475 21:04:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:04.734 [2024-07-14 21:04:16.068626] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
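The inner skip_rpc test deliberately launches the target with --no-rpc-server, so no listener ever appears on /var/tmp/spdk.sock; the NOT rpc_cmd spdk_get_version probe that follows is therefore expected to fail, and that failure is what the test counts as a pass. A hand-driven equivalent would look roughly like this (the scripts/rpc.py path is assumed from the standard SPDK layout):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    # Expected to fail: no RPC listener was ever created.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version || echo 'RPC unavailable, as expected'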
00:05:04.734 [2024-07-14 21:04:16.068816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59705 ] 00:05:04.734 [2024-07-14 21:04:16.240190] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.993 [2024-07-14 21:04:16.421076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.251 [2024-07-14 21:04:16.602535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59705 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 59705 ']' 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 59705 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59705 00:05:09.462 killing process with pid 59705 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59705' 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 59705 00:05:09.462 21:04:20 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 59705 00:05:11.364 ************************************ 00:05:11.364 END TEST skip_rpc 00:05:11.364 ************************************ 00:05:11.364 00:05:11.364 real 0m6.858s 00:05:11.364 user 0m6.439s 00:05:11.364 sys 0m0.310s 00:05:11.364 21:04:22 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.364 21:04:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.364 21:04:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:11.364 21:04:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:11.364 21:04:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.364 21:04:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.364 21:04:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.364 ************************************ 00:05:11.364 START TEST skip_rpc_with_json 00:05:11.364 ************************************ 00:05:11.364 21:04:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:11.364 21:04:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:11.364 21:04:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59809 00:05:11.364 21:04:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.364 21:04:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.364 21:04:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59809 00:05:11.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.364 21:04:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59809 ']' 00:05:11.364 21:04:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.364 21:04:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.364 21:04:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.364 21:04:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.364 21:04:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.622 [2024-07-14 21:04:22.983557] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
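skip_rpc_with_json starts a normal target this time (RPC server enabled) and waits for /var/tmp/spdk.sock before driving it: as the trace below shows, nvmf_get_transports is expected to fail while no TCP transport exists, then the test creates one and saves the running configuration to test/rpc/config.json for the replay stage. Driven by hand through scripts/rpc.py, the same sequence would look roughly like this (the socket-wait loop is a simplified stand-in for waitforlisten):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # simplified waitforlisten
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json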
00:05:11.622 [2024-07-14 21:04:22.983734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59809 ] 00:05:11.623 [2024-07-14 21:04:23.152026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.880 [2024-07-14 21:04:23.326902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.136 [2024-07-14 21:04:23.497240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:12.394 21:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.394 21:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:12.394 21:04:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:12.394 21:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.394 21:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.394 [2024-07-14 21:04:23.931429] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:12.394 request: 00:05:12.394 { 00:05:12.394 "trtype": "tcp", 00:05:12.394 "method": "nvmf_get_transports", 00:05:12.394 "req_id": 1 00:05:12.394 } 00:05:12.394 Got JSON-RPC error response 00:05:12.394 response: 00:05:12.394 { 00:05:12.394 "code": -19, 00:05:12.394 "message": "No such device" 00:05:12.394 } 00:05:12.394 21:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:12.394 21:04:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:12.394 21:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.394 21:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.652 [2024-07-14 21:04:23.943584] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.652 21:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.652 21:04:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:12.652 21:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.652 21:04:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.652 21:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.652 21:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:12.652 { 00:05:12.652 "subsystems": [ 00:05:12.652 { 00:05:12.652 "subsystem": "vfio_user_target", 00:05:12.652 "config": null 00:05:12.652 }, 00:05:12.652 { 00:05:12.652 "subsystem": "keyring", 00:05:12.652 "config": [] 00:05:12.652 }, 00:05:12.652 { 00:05:12.652 "subsystem": "iobuf", 00:05:12.652 "config": [ 00:05:12.652 { 00:05:12.652 "method": "iobuf_set_options", 00:05:12.652 "params": { 00:05:12.652 "small_pool_count": 8192, 00:05:12.652 "large_pool_count": 1024, 00:05:12.652 "small_bufsize": 8192, 00:05:12.652 "large_bufsize": 135168 00:05:12.652 } 00:05:12.653 } 00:05:12.653 ] 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "subsystem": "sock", 00:05:12.653 "config": [ 00:05:12.653 { 00:05:12.653 "method": "sock_set_default_impl", 00:05:12.653 "params": { 00:05:12.653 "impl_name": 
"uring" 00:05:12.653 } 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "method": "sock_impl_set_options", 00:05:12.653 "params": { 00:05:12.653 "impl_name": "ssl", 00:05:12.653 "recv_buf_size": 4096, 00:05:12.653 "send_buf_size": 4096, 00:05:12.653 "enable_recv_pipe": true, 00:05:12.653 "enable_quickack": false, 00:05:12.653 "enable_placement_id": 0, 00:05:12.653 "enable_zerocopy_send_server": true, 00:05:12.653 "enable_zerocopy_send_client": false, 00:05:12.653 "zerocopy_threshold": 0, 00:05:12.653 "tls_version": 0, 00:05:12.653 "enable_ktls": false 00:05:12.653 } 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "method": "sock_impl_set_options", 00:05:12.653 "params": { 00:05:12.653 "impl_name": "posix", 00:05:12.653 "recv_buf_size": 2097152, 00:05:12.653 "send_buf_size": 2097152, 00:05:12.653 "enable_recv_pipe": true, 00:05:12.653 "enable_quickack": false, 00:05:12.653 "enable_placement_id": 0, 00:05:12.653 "enable_zerocopy_send_server": true, 00:05:12.653 "enable_zerocopy_send_client": false, 00:05:12.653 "zerocopy_threshold": 0, 00:05:12.653 "tls_version": 0, 00:05:12.653 "enable_ktls": false 00:05:12.653 } 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "method": "sock_impl_set_options", 00:05:12.653 "params": { 00:05:12.653 "impl_name": "uring", 00:05:12.653 "recv_buf_size": 2097152, 00:05:12.653 "send_buf_size": 2097152, 00:05:12.653 "enable_recv_pipe": true, 00:05:12.653 "enable_quickack": false, 00:05:12.653 "enable_placement_id": 0, 00:05:12.653 "enable_zerocopy_send_server": false, 00:05:12.653 "enable_zerocopy_send_client": false, 00:05:12.653 "zerocopy_threshold": 0, 00:05:12.653 "tls_version": 0, 00:05:12.653 "enable_ktls": false 00:05:12.653 } 00:05:12.653 } 00:05:12.653 ] 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "subsystem": "vmd", 00:05:12.653 "config": [] 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "subsystem": "accel", 00:05:12.653 "config": [ 00:05:12.653 { 00:05:12.653 "method": "accel_set_options", 00:05:12.653 "params": { 00:05:12.653 "small_cache_size": 128, 00:05:12.653 "large_cache_size": 16, 00:05:12.653 "task_count": 2048, 00:05:12.653 "sequence_count": 2048, 00:05:12.653 "buf_count": 2048 00:05:12.653 } 00:05:12.653 } 00:05:12.653 ] 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "subsystem": "bdev", 00:05:12.653 "config": [ 00:05:12.653 { 00:05:12.653 "method": "bdev_set_options", 00:05:12.653 "params": { 00:05:12.653 "bdev_io_pool_size": 65535, 00:05:12.653 "bdev_io_cache_size": 256, 00:05:12.653 "bdev_auto_examine": true, 00:05:12.653 "iobuf_small_cache_size": 128, 00:05:12.653 "iobuf_large_cache_size": 16 00:05:12.653 } 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "method": "bdev_raid_set_options", 00:05:12.653 "params": { 00:05:12.653 "process_window_size_kb": 1024 00:05:12.653 } 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "method": "bdev_iscsi_set_options", 00:05:12.653 "params": { 00:05:12.653 "timeout_sec": 30 00:05:12.653 } 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "method": "bdev_nvme_set_options", 00:05:12.653 "params": { 00:05:12.653 "action_on_timeout": "none", 00:05:12.653 "timeout_us": 0, 00:05:12.653 "timeout_admin_us": 0, 00:05:12.653 "keep_alive_timeout_ms": 10000, 00:05:12.653 "arbitration_burst": 0, 00:05:12.653 "low_priority_weight": 0, 00:05:12.653 "medium_priority_weight": 0, 00:05:12.653 "high_priority_weight": 0, 00:05:12.653 "nvme_adminq_poll_period_us": 10000, 00:05:12.653 "nvme_ioq_poll_period_us": 0, 00:05:12.653 "io_queue_requests": 0, 00:05:12.653 "delay_cmd_submit": true, 00:05:12.653 "transport_retry_count": 4, 00:05:12.653 
"bdev_retry_count": 3, 00:05:12.653 "transport_ack_timeout": 0, 00:05:12.653 "ctrlr_loss_timeout_sec": 0, 00:05:12.653 "reconnect_delay_sec": 0, 00:05:12.653 "fast_io_fail_timeout_sec": 0, 00:05:12.653 "disable_auto_failback": false, 00:05:12.653 "generate_uuids": false, 00:05:12.653 "transport_tos": 0, 00:05:12.653 "nvme_error_stat": false, 00:05:12.653 "rdma_srq_size": 0, 00:05:12.653 "io_path_stat": false, 00:05:12.653 "allow_accel_sequence": false, 00:05:12.653 "rdma_max_cq_size": 0, 00:05:12.653 "rdma_cm_event_timeout_ms": 0, 00:05:12.653 "dhchap_digests": [ 00:05:12.653 "sha256", 00:05:12.653 "sha384", 00:05:12.653 "sha512" 00:05:12.653 ], 00:05:12.653 "dhchap_dhgroups": [ 00:05:12.653 "null", 00:05:12.653 "ffdhe2048", 00:05:12.653 "ffdhe3072", 00:05:12.653 "ffdhe4096", 00:05:12.653 "ffdhe6144", 00:05:12.653 "ffdhe8192" 00:05:12.653 ] 00:05:12.653 } 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "method": "bdev_nvme_set_hotplug", 00:05:12.653 "params": { 00:05:12.653 "period_us": 100000, 00:05:12.653 "enable": false 00:05:12.653 } 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "method": "bdev_wait_for_examine" 00:05:12.653 } 00:05:12.653 ] 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "subsystem": "scsi", 00:05:12.653 "config": null 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "subsystem": "scheduler", 00:05:12.653 "config": [ 00:05:12.653 { 00:05:12.653 "method": "framework_set_scheduler", 00:05:12.653 "params": { 00:05:12.653 "name": "static" 00:05:12.653 } 00:05:12.653 } 00:05:12.653 ] 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "subsystem": "vhost_scsi", 00:05:12.653 "config": [] 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "subsystem": "vhost_blk", 00:05:12.653 "config": [] 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "subsystem": "ublk", 00:05:12.653 "config": [] 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "subsystem": "nbd", 00:05:12.653 "config": [] 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "subsystem": "nvmf", 00:05:12.653 "config": [ 00:05:12.653 { 00:05:12.653 "method": "nvmf_set_config", 00:05:12.653 "params": { 00:05:12.653 "discovery_filter": "match_any", 00:05:12.653 "admin_cmd_passthru": { 00:05:12.653 "identify_ctrlr": false 00:05:12.653 } 00:05:12.653 } 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "method": "nvmf_set_max_subsystems", 00:05:12.653 "params": { 00:05:12.653 "max_subsystems": 1024 00:05:12.653 } 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "method": "nvmf_set_crdt", 00:05:12.653 "params": { 00:05:12.653 "crdt1": 0, 00:05:12.653 "crdt2": 0, 00:05:12.653 "crdt3": 0 00:05:12.653 } 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "method": "nvmf_create_transport", 00:05:12.653 "params": { 00:05:12.653 "trtype": "TCP", 00:05:12.653 "max_queue_depth": 128, 00:05:12.653 "max_io_qpairs_per_ctrlr": 127, 00:05:12.653 "in_capsule_data_size": 4096, 00:05:12.653 "max_io_size": 131072, 00:05:12.653 "io_unit_size": 131072, 00:05:12.653 "max_aq_depth": 128, 00:05:12.653 "num_shared_buffers": 511, 00:05:12.653 "buf_cache_size": 4294967295, 00:05:12.653 "dif_insert_or_strip": false, 00:05:12.653 "zcopy": false, 00:05:12.653 "c2h_success": true, 00:05:12.653 "sock_priority": 0, 00:05:12.653 "abort_timeout_sec": 1, 00:05:12.653 "ack_timeout": 0, 00:05:12.653 "data_wr_pool_size": 0 00:05:12.653 } 00:05:12.653 } 00:05:12.653 ] 00:05:12.653 }, 00:05:12.653 { 00:05:12.653 "subsystem": "iscsi", 00:05:12.653 "config": [ 00:05:12.653 { 00:05:12.653 "method": "iscsi_set_options", 00:05:12.653 "params": { 00:05:12.653 "node_base": "iqn.2016-06.io.spdk", 00:05:12.653 "max_sessions": 128, 
00:05:12.653 "max_connections_per_session": 2, 00:05:12.653 "max_queue_depth": 64, 00:05:12.653 "default_time2wait": 2, 00:05:12.653 "default_time2retain": 20, 00:05:12.653 "first_burst_length": 8192, 00:05:12.653 "immediate_data": true, 00:05:12.653 "allow_duplicated_isid": false, 00:05:12.653 "error_recovery_level": 0, 00:05:12.653 "nop_timeout": 60, 00:05:12.653 "nop_in_interval": 30, 00:05:12.653 "disable_chap": false, 00:05:12.653 "require_chap": false, 00:05:12.653 "mutual_chap": false, 00:05:12.653 "chap_group": 0, 00:05:12.653 "max_large_datain_per_connection": 64, 00:05:12.653 "max_r2t_per_connection": 4, 00:05:12.653 "pdu_pool_size": 36864, 00:05:12.653 "immediate_data_pool_size": 16384, 00:05:12.653 "data_out_pool_size": 2048 00:05:12.653 } 00:05:12.653 } 00:05:12.653 ] 00:05:12.653 } 00:05:12.653 ] 00:05:12.653 } 00:05:12.653 21:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:12.653 21:04:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59809 00:05:12.653 21:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59809 ']' 00:05:12.653 21:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59809 00:05:12.653 21:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:12.653 21:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.653 21:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59809 00:05:12.653 killing process with pid 59809 00:05:12.653 21:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.653 21:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.653 21:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59809' 00:05:12.653 21:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59809 00:05:12.654 21:04:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59809 00:05:14.554 21:04:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59854 00:05:14.554 21:04:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:14.554 21:04:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:19.814 21:04:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59854 00:05:19.814 21:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59854 ']' 00:05:19.814 21:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59854 00:05:19.814 21:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:19.814 21:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.814 21:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59854 00:05:19.814 killing process with pid 59854 00:05:19.814 21:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.814 21:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.814 21:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 59854' 00:05:19.814 21:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59854 00:05:19.814 21:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59854 00:05:21.186 21:04:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:21.186 21:04:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:21.186 ************************************ 00:05:21.186 END TEST skip_rpc_with_json 00:05:21.186 ************************************ 00:05:21.186 00:05:21.186 real 0m9.865s 00:05:21.186 user 0m9.543s 00:05:21.186 sys 0m0.701s 00:05:21.186 21:04:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.186 21:04:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.444 21:04:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:21.444 21:04:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:21.444 21:04:32 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.444 21:04:32 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.444 21:04:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.444 ************************************ 00:05:21.444 START TEST skip_rpc_with_delay 00:05:21.444 ************************************ 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.444 [2024-07-14 21:04:32.897132] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
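The error just above is the expected outcome of test_skip_rpc_with_delay: '--wait-for-rpc' only makes sense when an RPC server will actually be started, so combining it with '--no-rpc-server' has to abort. A minimal stand-alone reproduction using the same binary path as this run (the NOT() wrapper from autotest_common.sh, visible in the trace above, turns the non-zero exit into a test pass):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
# expected: "Cannot use '--wait-for-rpc' if no RPC server is going to be started." followed by a non-zero exit code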
00:05:21.444 [2024-07-14 21:04:32.897287] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:21.444 ************************************ 00:05:21.444 END TEST skip_rpc_with_delay 00:05:21.444 ************************************ 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:21.444 00:05:21.444 real 0m0.197s 00:05:21.444 user 0m0.117s 00:05:21.444 sys 0m0.077s 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.444 21:04:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:21.703 21:04:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:21.703 21:04:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:21.703 21:04:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:21.703 21:04:33 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:21.703 21:04:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.703 21:04:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.703 21:04:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.703 ************************************ 00:05:21.703 START TEST exit_on_failed_rpc_init 00:05:21.703 ************************************ 00:05:21.703 21:04:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:21.703 21:04:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59982 00:05:21.703 21:04:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59982 00:05:21.703 21:04:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.703 21:04:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59982 ']' 00:05:21.703 21:04:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.703 21:04:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.703 21:04:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.703 21:04:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.703 21:04:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:21.703 [2024-07-14 21:04:33.130171] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:21.703 [2024-07-14 21:04:33.130309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59982 ] 00:05:21.962 [2024-07-14 21:04:33.285313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.962 [2024-07-14 21:04:33.453392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.220 [2024-07-14 21:04:33.618146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:22.787 21:04:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.787 21:04:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:22.787 21:04:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.787 21:04:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.787 21:04:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:22.787 21:04:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.787 21:04:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.787 21:04:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.787 21:04:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.787 21:04:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.787 21:04:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.787 21:04:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.787 21:04:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.787 21:04:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:22.787 21:04:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.787 [2024-07-14 21:04:34.252649] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:22.787 [2024-07-14 21:04:34.252866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60000 ] 00:05:23.046 [2024-07-14 21:04:34.421957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.304 [2024-07-14 21:04:34.640645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.304 [2024-07-14 21:04:34.640760] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:23.304 [2024-07-14 21:04:34.640820] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:23.304 [2024-07-14 21:04:34.640845] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59982 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59982 ']' 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59982 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59982 00:05:23.562 killing process with pid 59982 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59982' 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59982 00:05:23.562 21:04:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59982 00:05:25.479 00:05:25.479 real 0m3.975s 00:05:25.479 user 0m4.745s 00:05:25.479 sys 0m0.504s 00:05:25.479 21:04:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.479 ************************************ 00:05:25.479 END TEST exit_on_failed_rpc_init 00:05:25.479 ************************************ 00:05:25.479 21:04:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.738 21:04:37 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:25.738 21:04:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:25.738 ************************************ 00:05:25.738 END TEST skip_rpc 00:05:25.738 ************************************ 00:05:25.738 00:05:25.738 real 0m21.188s 00:05:25.738 user 0m20.930s 00:05:25.738 sys 0m1.779s 00:05:25.738 21:04:37 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.738 21:04:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.738 21:04:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:25.738 21:04:37 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:25.738 21:04:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.738 
21:04:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.738 21:04:37 -- common/autotest_common.sh@10 -- # set +x 00:05:25.738 ************************************ 00:05:25.738 START TEST rpc_client 00:05:25.738 ************************************ 00:05:25.738 21:04:37 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:25.738 * Looking for test storage... 00:05:25.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:25.738 21:04:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:25.738 OK 00:05:25.738 21:04:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:25.738 00:05:25.738 real 0m0.150s 00:05:25.738 user 0m0.071s 00:05:25.738 sys 0m0.084s 00:05:25.738 21:04:37 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.738 21:04:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:25.738 ************************************ 00:05:25.738 END TEST rpc_client 00:05:25.738 ************************************ 00:05:25.738 21:04:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:25.738 21:04:37 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:25.738 21:04:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.738 21:04:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.738 21:04:37 -- common/autotest_common.sh@10 -- # set +x 00:05:25.997 ************************************ 00:05:25.997 START TEST json_config 00:05:25.997 ************************************ 00:05:25.997 21:04:37 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:25.997 21:04:37 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.997 21:04:37 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:25.997 21:04:37 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.997 21:04:37 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.997 21:04:37 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.997 21:04:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.997 21:04:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.997 21:04:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.997 21:04:37 json_config -- paths/export.sh@5 -- # export PATH 00:05:25.997 21:04:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@47 -- # : 0 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:25.997 21:04:37 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:25.997 21:04:37 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:25.997 21:04:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:25.997 21:04:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:25.997 21:04:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:25.997 21:04:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:25.997 21:04:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:25.997 21:04:37 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:25.997 21:04:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:25.997 21:04:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:25.997 21:04:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:25.997 21:04:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:25.997 21:04:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:25.998 21:04:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:25.998 21:04:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:25.998 INFO: JSON configuration test init 00:05:25.998 21:04:37 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:25.998 21:04:37 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:25.998 21:04:37 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:25.998 21:04:37 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:25.998 21:04:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.998 21:04:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.998 21:04:37 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:25.998 21:04:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.998 21:04:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.998 21:04:37 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:25.998 21:04:37 json_config -- json_config/common.sh@9 -- # local app=target 00:05:25.998 21:04:37 json_config -- json_config/common.sh@10 -- # shift 00:05:25.998 21:04:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:25.998 Waiting for target to run... 00:05:25.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:25.998 21:04:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:25.998 21:04:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:25.998 21:04:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.998 21:04:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.998 21:04:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60143 00:05:25.998 21:04:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
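For context on the startup being waited for here: the target is launched with '--wait-for-rpc', so it sits idle until a configuration is pushed over its RPC socket; the gen_nvme.sh / load_config pair traced a little further below (json_config.sh@273-274) is what feeds it. A condensed sketch of that flow, assuming the generated JSON is piped straight into load_config as those two xtrace lines suggest:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# after waitforlisten confirms the socket is up, push a generated config that includes the bdev subsystems
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems | \
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config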
00:05:25.998 21:04:37 json_config -- json_config/common.sh@25 -- # waitforlisten 60143 /var/tmp/spdk_tgt.sock 00:05:25.998 21:04:37 json_config -- common/autotest_common.sh@829 -- # '[' -z 60143 ']' 00:05:25.998 21:04:37 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:25.998 21:04:37 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:25.998 21:04:37 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.998 21:04:37 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:25.998 21:04:37 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.998 21:04:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.998 [2024-07-14 21:04:37.511371] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:25.998 [2024-07-14 21:04:37.511551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60143 ] 00:05:26.564 [2024-07-14 21:04:37.863951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.565 [2024-07-14 21:04:38.014660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.131 00:05:27.131 21:04:38 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.131 21:04:38 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:27.131 21:04:38 json_config -- json_config/common.sh@26 -- # echo '' 00:05:27.131 21:04:38 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:27.131 21:04:38 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:27.131 21:04:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.131 21:04:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.131 21:04:38 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:27.131 21:04:38 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:27.131 21:04:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:27.131 21:04:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.131 21:04:38 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:27.131 21:04:38 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:27.131 21:04:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:27.389 [2024-07-14 21:04:38.895588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:27.953 21:04:39 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:27.953 21:04:39 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:27.953 21:04:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.953 21:04:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.953 21:04:39 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:27.953 21:04:39 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:27.953 21:04:39 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:27.953 21:04:39 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:27.953 21:04:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:27.953 21:04:39 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:28.210 21:04:39 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:28.210 21:04:39 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:28.210 21:04:39 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:28.210 21:04:39 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:28.210 21:04:39 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.210 21:04:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.210 21:04:39 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:28.210 21:04:39 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:28.210 21:04:39 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:28.210 21:04:39 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:28.210 21:04:39 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:28.210 21:04:39 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:28.210 21:04:39 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:28.210 21:04:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.211 21:04:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.211 21:04:39 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:28.211 21:04:39 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:28.211 21:04:39 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:28.211 21:04:39 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.211 21:04:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.468 MallocForNvmf0 00:05:28.468 21:04:40 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:28.468 21:04:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:28.725 MallocForNvmf1 00:05:28.725 21:04:40 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:28.725 21:04:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:28.983 [2024-07-14 21:04:40.445557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:28.983 21:04:40 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:28.983 21:04:40 json_config -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.241 21:04:40 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.241 21:04:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.499 21:04:40 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.499 21:04:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.499 21:04:41 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:29.499 21:04:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:29.758 [2024-07-14 21:04:41.218079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:29.758 21:04:41 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:29.758 21:04:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.758 21:04:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.758 21:04:41 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:29.758 21:04:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.758 21:04:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.016 21:04:41 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:30.016 21:04:41 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.016 21:04:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.275 MallocBdevForConfigChangeCheck 00:05:30.275 21:04:41 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:30.275 21:04:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.275 21:04:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.275 21:04:41 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:30.275 21:04:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.533 INFO: shutting down applications... 00:05:30.533 21:04:41 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
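The configuration just saved was built entirely through rpc.py calls that are scattered across the trace above; collected in one place, with the same socket and arguments as this run, the sequence is roughly the following (the redirect into spdk_tgt_config.json is an assumption based on the configs_path mapping shown earlier and on the relaunch below that reads that file):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
$rpc save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json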
00:05:30.533 21:04:41 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:30.533 21:04:41 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:30.533 21:04:41 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:30.533 21:04:41 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:30.792 Calling clear_iscsi_subsystem 00:05:30.792 Calling clear_nvmf_subsystem 00:05:30.792 Calling clear_nbd_subsystem 00:05:30.792 Calling clear_ublk_subsystem 00:05:30.792 Calling clear_vhost_blk_subsystem 00:05:30.792 Calling clear_vhost_scsi_subsystem 00:05:30.792 Calling clear_bdev_subsystem 00:05:30.792 21:04:42 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:30.792 21:04:42 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:30.792 21:04:42 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:30.792 21:04:42 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.792 21:04:42 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:30.792 21:04:42 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:31.359 21:04:42 json_config -- json_config/json_config.sh@345 -- # break 00:05:31.359 21:04:42 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:31.359 21:04:42 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:31.359 21:04:42 json_config -- json_config/common.sh@31 -- # local app=target 00:05:31.359 21:04:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:31.359 21:04:42 json_config -- json_config/common.sh@35 -- # [[ -n 60143 ]] 00:05:31.359 21:04:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 60143 00:05:31.359 21:04:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:31.359 21:04:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.359 21:04:42 json_config -- json_config/common.sh@41 -- # kill -0 60143 00:05:31.359 21:04:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:31.618 21:04:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:31.618 21:04:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.618 21:04:43 json_config -- json_config/common.sh@41 -- # kill -0 60143 00:05:31.618 21:04:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.188 21:04:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.188 21:04:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.188 21:04:43 json_config -- json_config/common.sh@41 -- # kill -0 60143 00:05:32.188 21:04:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:32.188 21:04:43 json_config -- json_config/common.sh@43 -- # break 00:05:32.188 SPDK target shutdown done 00:05:32.188 21:04:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:32.188 21:04:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:32.188 INFO: relaunching applications... 
00:05:32.188 21:04:43 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:32.188 21:04:43 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.188 21:04:43 json_config -- json_config/common.sh@9 -- # local app=target 00:05:32.188 21:04:43 json_config -- json_config/common.sh@10 -- # shift 00:05:32.188 21:04:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:32.188 21:04:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:32.188 21:04:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:32.188 21:04:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.188 21:04:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.188 21:04:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60341 00:05:32.188 Waiting for target to run... 00:05:32.188 21:04:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:32.188 21:04:43 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.188 21:04:43 json_config -- json_config/common.sh@25 -- # waitforlisten 60341 /var/tmp/spdk_tgt.sock 00:05:32.188 21:04:43 json_config -- common/autotest_common.sh@829 -- # '[' -z 60341 ']' 00:05:32.188 21:04:43 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:32.188 21:04:43 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:32.188 21:04:43 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:32.188 21:04:43 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.188 21:04:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.446 [2024-07-14 21:04:43.785173] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:32.446 [2024-07-14 21:04:43.785357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60341 ] 00:05:32.704 [2024-07-14 21:04:44.111819] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.963 [2024-07-14 21:04:44.310661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.221 [2024-07-14 21:04:44.572368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:33.788 [2024-07-14 21:04:45.129377] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.788 [2024-07-14 21:04:45.161606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:33.788 00:05:33.788 INFO: Checking if target configuration is the same... 
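The "configuration is the same" check traced below (json_diff.sh) amounts to: dump the live configuration again, normalize both JSON documents with config_filter.py, and require an empty diff. A condensed sketch with the same helpers as this run, assuming config_filter.py -method sort filters stdin to stdout as the trace suggests (the /tmp file names here are illustrative, the real ones come from mktemp):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > /tmp/live.sorted.json
/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
    < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.sorted.json
diff -u /tmp/live.sorted.json /tmp/saved.sorted.json && echo 'INFO: JSON config files are the same'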
00:05:33.788 21:04:45 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.788 21:04:45 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:33.788 21:04:45 json_config -- json_config/common.sh@26 -- # echo '' 00:05:33.788 21:04:45 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:33.788 21:04:45 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:33.788 21:04:45 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:33.788 21:04:45 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:33.788 21:04:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.788 + '[' 2 -ne 2 ']' 00:05:33.788 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:33.788 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:33.788 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:33.788 +++ basename /dev/fd/62 00:05:33.788 ++ mktemp /tmp/62.XXX 00:05:33.788 + tmp_file_1=/tmp/62.6bi 00:05:33.788 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:33.788 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:33.788 + tmp_file_2=/tmp/spdk_tgt_config.json.EGO 00:05:33.788 + ret=0 00:05:33.788 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:34.355 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:34.355 + diff -u /tmp/62.6bi /tmp/spdk_tgt_config.json.EGO 00:05:34.355 INFO: JSON config files are the same 00:05:34.355 + echo 'INFO: JSON config files are the same' 00:05:34.355 + rm /tmp/62.6bi /tmp/spdk_tgt_config.json.EGO 00:05:34.355 + exit 0 00:05:34.355 INFO: changing configuration and checking if this can be detected... 00:05:34.355 21:04:45 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:34.355 21:04:45 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:34.355 21:04:45 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:34.355 21:04:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:34.614 21:04:45 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:34.614 21:04:45 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:34.614 21:04:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.614 + '[' 2 -ne 2 ']' 00:05:34.614 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:34.614 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:34.614 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:34.614 +++ basename /dev/fd/62 00:05:34.614 ++ mktemp /tmp/62.XXX 00:05:34.614 + tmp_file_1=/tmp/62.p3W 00:05:34.614 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:34.614 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:34.614 + tmp_file_2=/tmp/spdk_tgt_config.json.RsE 00:05:34.614 + ret=0 00:05:34.614 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:34.873 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:34.873 + diff -u /tmp/62.p3W /tmp/spdk_tgt_config.json.RsE 00:05:34.873 + ret=1 00:05:34.873 + echo '=== Start of file: /tmp/62.p3W ===' 00:05:34.873 + cat /tmp/62.p3W 00:05:34.873 + echo '=== End of file: /tmp/62.p3W ===' 00:05:34.873 + echo '' 00:05:34.873 + echo '=== Start of file: /tmp/spdk_tgt_config.json.RsE ===' 00:05:34.873 + cat /tmp/spdk_tgt_config.json.RsE 00:05:34.873 + echo '=== End of file: /tmp/spdk_tgt_config.json.RsE ===' 00:05:34.873 + echo '' 00:05:34.873 + rm /tmp/62.p3W /tmp/spdk_tgt_config.json.RsE 00:05:34.873 + exit 1 00:05:34.873 INFO: configuration change detected. 00:05:34.873 21:04:46 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:34.873 21:04:46 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:34.873 21:04:46 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:34.873 21:04:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.873 21:04:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.873 21:04:46 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:34.873 21:04:46 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:34.873 21:04:46 json_config -- json_config/json_config.sh@317 -- # [[ -n 60341 ]] 00:05:34.873 21:04:46 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:34.873 21:04:46 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:34.873 21:04:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.873 21:04:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.873 21:04:46 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:34.873 21:04:46 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:34.873 21:04:46 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:34.873 21:04:46 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:34.873 21:04:46 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:34.873 21:04:46 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:34.873 21:04:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.873 21:04:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.132 21:04:46 json_config -- json_config/json_config.sh@323 -- # killprocess 60341 00:05:35.132 21:04:46 json_config -- common/autotest_common.sh@948 -- # '[' -z 60341 ']' 00:05:35.132 21:04:46 json_config -- common/autotest_common.sh@952 -- # kill -0 60341 00:05:35.132 21:04:46 json_config -- common/autotest_common.sh@953 -- # uname 00:05:35.132 21:04:46 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.132 21:04:46 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60341 00:05:35.132 
killing process with pid 60341 00:05:35.132 21:04:46 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.132 21:04:46 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.132 21:04:46 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60341' 00:05:35.132 21:04:46 json_config -- common/autotest_common.sh@967 -- # kill 60341 00:05:35.132 21:04:46 json_config -- common/autotest_common.sh@972 -- # wait 60341 00:05:35.700 21:04:47 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:35.700 21:04:47 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:35.700 21:04:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.700 21:04:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.960 INFO: Success 00:05:35.960 21:04:47 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:35.960 21:04:47 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:35.960 ************************************ 00:05:35.960 END TEST json_config 00:05:35.960 ************************************ 00:05:35.960 00:05:35.960 real 0m9.992s 00:05:35.960 user 0m13.235s 00:05:35.960 sys 0m1.629s 00:05:35.960 21:04:47 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.960 21:04:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.960 21:04:47 -- common/autotest_common.sh@1142 -- # return 0 00:05:35.960 21:04:47 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:35.960 21:04:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.960 21:04:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.960 21:04:47 -- common/autotest_common.sh@10 -- # set +x 00:05:35.960 ************************************ 00:05:35.960 START TEST json_config_extra_key 00:05:35.960 ************************************ 00:05:35.960 21:04:47 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:35.960 21:04:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.960 21:04:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:35.960 21:04:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.960 21:04:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.960 21:04:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.961 21:04:47 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.961 21:04:47 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.961 21:04:47 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.961 21:04:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.961 21:04:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.961 21:04:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.961 21:04:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:35.961 21:04:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.961 21:04:47 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:35.961 21:04:47 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:35.961 21:04:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:35.961 21:04:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:35.961 21:04:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:35.961 21:04:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:35.961 21:04:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:35.961 21:04:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:35.961 21:04:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:35.961 21:04:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:35.961 21:04:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:35.961 21:04:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.961 INFO: launching applications... 00:05:35.961 21:04:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:35.961 21:04:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.961 21:04:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:35.961 21:04:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:35.961 21:04:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.961 21:04:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.961 21:04:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.961 21:04:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.961 21:04:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.961 Waiting for target to run... 00:05:35.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.961 21:04:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=60499 00:05:35.961 21:04:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
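The json_config_extra_key test launches a dedicated target with its own RPC socket and the extra_key.json config, then blocks in waitforlisten until that socket answers. A minimal sketch of the launch-and-wait pattern; the binary, socket and config paths are the ones shown in the trace, while the polling loop is a simplified stand-in for the real waitforlisten helper:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_sock=/var/tmp/spdk_tgt.sock
    config=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

    # Start the target on core 0 with 1024 MiB of memory, as in the trace.
    "$spdk_tgt" -m 0x1 -s 1024 -r "$rpc_sock" --json "$config" &
    app_pid=$!

    # Poll the RPC socket until the target responds (simplified waitforlisten).
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" -t 1 rpc_get_methods \
            > /dev/null 2>&1 && break
        sleep 0.1
    done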
00:05:35.961 21:04:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 60499 /var/tmp/spdk_tgt.sock 00:05:35.961 21:04:47 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.961 21:04:47 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 60499 ']' 00:05:35.961 21:04:47 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.961 21:04:47 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.961 21:04:47 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.961 21:04:47 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.961 21:04:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:36.220 [2024-07-14 21:04:47.540289] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:36.220 [2024-07-14 21:04:47.540695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60499 ] 00:05:36.478 [2024-07-14 21:04:47.878818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.478 [2024-07-14 21:04:48.017716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.737 [2024-07-14 21:04:48.156569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:37.304 00:05:37.305 INFO: shutting down applications... 00:05:37.305 21:04:48 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.305 21:04:48 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:37.305 21:04:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:37.305 21:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
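The shutdown that follows sends SIGINT to pid 60499 and then polls with kill -0 every half second, giving the target up to 30 attempts to exit. The same logic as a small sketch of json_config/common.sh (the failure message wording is illustrative):

    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        # Poll until the process is gone, at most 30 * 0.5 s as seen in the trace below.
        for (( i = 0; i < 30; i++ )); do
            if ! kill -0 "$pid" 2> /dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        echo "App with pid $pid still running after SIGINT" >&2
        return 1
    }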
00:05:37.305 21:04:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:37.305 21:04:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:37.305 21:04:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:37.305 21:04:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 60499 ]] 00:05:37.305 21:04:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 60499 00:05:37.305 21:04:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:37.305 21:04:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.305 21:04:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60499 00:05:37.305 21:04:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.563 21:04:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.563 21:04:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.563 21:04:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60499 00:05:37.563 21:04:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.183 21:04:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.183 21:04:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.183 21:04:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60499 00:05:38.183 21:04:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.751 21:04:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.751 21:04:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.751 21:04:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60499 00:05:38.751 21:04:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.319 21:04:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.319 21:04:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.319 21:04:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60499 00:05:39.319 21:04:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:39.319 21:04:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:39.319 SPDK target shutdown done 00:05:39.319 Success 00:05:39.319 21:04:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:39.319 21:04:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:39.319 21:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:39.319 00:05:39.319 real 0m3.237s 00:05:39.319 user 0m3.192s 00:05:39.319 sys 0m0.452s 00:05:39.319 21:04:50 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.319 21:04:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:39.319 ************************************ 00:05:39.319 END TEST json_config_extra_key 00:05:39.319 ************************************ 00:05:39.319 21:04:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.319 21:04:50 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.319 21:04:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.319 21:04:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.319 21:04:50 -- 
common/autotest_common.sh@10 -- # set +x 00:05:39.319 ************************************ 00:05:39.319 START TEST alias_rpc 00:05:39.319 ************************************ 00:05:39.319 21:04:50 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.319 * Looking for test storage... 00:05:39.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:39.319 21:04:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:39.319 21:04:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60584 00:05:39.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.319 21:04:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60584 00:05:39.319 21:04:50 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 60584 ']' 00:05:39.319 21:04:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.319 21:04:50 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.319 21:04:50 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.319 21:04:50 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.319 21:04:50 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.319 21:04:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.319 [2024-07-14 21:04:50.834415] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:39.319 [2024-07-14 21:04:50.834961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60584 ] 00:05:39.578 [2024-07-14 21:04:51.006739] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.837 [2024-07-14 21:04:51.159840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.837 [2024-07-14 21:04:51.311056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:40.404 21:04:51 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.404 21:04:51 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:40.404 21:04:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:40.664 21:04:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60584 00:05:40.664 21:04:52 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 60584 ']' 00:05:40.664 21:04:52 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 60584 00:05:40.664 21:04:52 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:40.664 21:04:52 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.664 21:04:52 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60584 00:05:40.664 killing process with pid 60584 00:05:40.664 21:04:52 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.664 21:04:52 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.664 21:04:52 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60584' 00:05:40.664 21:04:52 alias_rpc -- common/autotest_common.sh@967 -- # kill 
60584 00:05:40.664 21:04:52 alias_rpc -- common/autotest_common.sh@972 -- # wait 60584 00:05:42.570 ************************************ 00:05:42.570 END TEST alias_rpc 00:05:42.570 ************************************ 00:05:42.570 00:05:42.570 real 0m3.273s 00:05:42.570 user 0m3.487s 00:05:42.570 sys 0m0.446s 00:05:42.570 21:04:53 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.570 21:04:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.570 21:04:53 -- common/autotest_common.sh@1142 -- # return 0 00:05:42.570 21:04:53 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:42.570 21:04:53 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:42.570 21:04:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.570 21:04:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.570 21:04:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.570 ************************************ 00:05:42.570 START TEST spdkcli_tcp 00:05:42.570 ************************************ 00:05:42.570 21:04:53 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:42.570 * Looking for test storage... 00:05:42.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:42.570 21:04:54 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:42.570 21:04:54 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:42.570 21:04:54 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:42.570 21:04:54 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:42.570 21:04:54 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:42.570 21:04:54 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:42.570 21:04:54 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:42.570 21:04:54 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.570 21:04:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.570 21:04:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=60678 00:05:42.570 21:04:54 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 60678 00:05:42.570 21:04:54 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 60678 ']' 00:05:42.570 21:04:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:42.570 21:04:54 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.570 21:04:54 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.570 21:04:54 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.570 21:04:54 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.570 21:04:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.830 [2024-07-14 21:04:54.161884] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
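The spdkcli_tcp run starting here uses two reactors (-m 0x3) and, a little further down, bridges the target's Unix-domain RPC socket to TCP port 9998 with socat so rpc.py can connect over 127.0.0.1. A sketch of that bridge, using the addresses and flags visible in the trace; the explicit kill of the socat pid stands in for the test's cleanup trap:

    # Expose the Unix RPC socket on TCP 127.0.0.1:9998.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Query the target over TCP: up to 100 connection retries, 2 s timeout per call.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    # Tear the bridge down once the test is done.
    kill "$socat_pid"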
00:05:42.830 [2024-07-14 21:04:54.162076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60678 ] 00:05:42.830 [2024-07-14 21:04:54.322249] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.089 [2024-07-14 21:04:54.500390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.089 [2024-07-14 21:04:54.500402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.348 [2024-07-14 21:04:54.678025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:43.916 21:04:55 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.916 21:04:55 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:43.916 21:04:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=60695 00:05:43.916 21:04:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:43.916 21:04:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:43.916 [ 00:05:43.916 "bdev_malloc_delete", 00:05:43.916 "bdev_malloc_create", 00:05:43.916 "bdev_null_resize", 00:05:43.916 "bdev_null_delete", 00:05:43.916 "bdev_null_create", 00:05:43.916 "bdev_nvme_cuse_unregister", 00:05:43.916 "bdev_nvme_cuse_register", 00:05:43.916 "bdev_opal_new_user", 00:05:43.916 "bdev_opal_set_lock_state", 00:05:43.916 "bdev_opal_delete", 00:05:43.916 "bdev_opal_get_info", 00:05:43.916 "bdev_opal_create", 00:05:43.916 "bdev_nvme_opal_revert", 00:05:43.916 "bdev_nvme_opal_init", 00:05:43.916 "bdev_nvme_send_cmd", 00:05:43.916 "bdev_nvme_get_path_iostat", 00:05:43.916 "bdev_nvme_get_mdns_discovery_info", 00:05:43.916 "bdev_nvme_stop_mdns_discovery", 00:05:43.916 "bdev_nvme_start_mdns_discovery", 00:05:43.916 "bdev_nvme_set_multipath_policy", 00:05:43.916 "bdev_nvme_set_preferred_path", 00:05:43.916 "bdev_nvme_get_io_paths", 00:05:43.916 "bdev_nvme_remove_error_injection", 00:05:43.917 "bdev_nvme_add_error_injection", 00:05:43.917 "bdev_nvme_get_discovery_info", 00:05:43.917 "bdev_nvme_stop_discovery", 00:05:43.917 "bdev_nvme_start_discovery", 00:05:43.917 "bdev_nvme_get_controller_health_info", 00:05:43.917 "bdev_nvme_disable_controller", 00:05:43.917 "bdev_nvme_enable_controller", 00:05:43.917 "bdev_nvme_reset_controller", 00:05:43.917 "bdev_nvme_get_transport_statistics", 00:05:43.917 "bdev_nvme_apply_firmware", 00:05:43.917 "bdev_nvme_detach_controller", 00:05:43.917 "bdev_nvme_get_controllers", 00:05:43.917 "bdev_nvme_attach_controller", 00:05:43.917 "bdev_nvme_set_hotplug", 00:05:43.917 "bdev_nvme_set_options", 00:05:43.917 "bdev_passthru_delete", 00:05:43.917 "bdev_passthru_create", 00:05:43.917 "bdev_lvol_set_parent_bdev", 00:05:43.917 "bdev_lvol_set_parent", 00:05:43.917 "bdev_lvol_check_shallow_copy", 00:05:43.917 "bdev_lvol_start_shallow_copy", 00:05:43.917 "bdev_lvol_grow_lvstore", 00:05:43.917 "bdev_lvol_get_lvols", 00:05:43.917 "bdev_lvol_get_lvstores", 00:05:43.917 "bdev_lvol_delete", 00:05:43.917 "bdev_lvol_set_read_only", 00:05:43.917 "bdev_lvol_resize", 00:05:43.917 "bdev_lvol_decouple_parent", 00:05:43.917 "bdev_lvol_inflate", 00:05:43.917 "bdev_lvol_rename", 00:05:43.917 "bdev_lvol_clone_bdev", 00:05:43.917 "bdev_lvol_clone", 00:05:43.917 "bdev_lvol_snapshot", 00:05:43.917 "bdev_lvol_create", 
00:05:43.917 "bdev_lvol_delete_lvstore", 00:05:43.917 "bdev_lvol_rename_lvstore", 00:05:43.917 "bdev_lvol_create_lvstore", 00:05:43.917 "bdev_raid_set_options", 00:05:43.917 "bdev_raid_remove_base_bdev", 00:05:43.917 "bdev_raid_add_base_bdev", 00:05:43.917 "bdev_raid_delete", 00:05:43.917 "bdev_raid_create", 00:05:43.917 "bdev_raid_get_bdevs", 00:05:43.917 "bdev_error_inject_error", 00:05:43.917 "bdev_error_delete", 00:05:43.917 "bdev_error_create", 00:05:43.917 "bdev_split_delete", 00:05:43.917 "bdev_split_create", 00:05:43.917 "bdev_delay_delete", 00:05:43.917 "bdev_delay_create", 00:05:43.917 "bdev_delay_update_latency", 00:05:43.917 "bdev_zone_block_delete", 00:05:43.917 "bdev_zone_block_create", 00:05:43.917 "blobfs_create", 00:05:43.917 "blobfs_detect", 00:05:43.917 "blobfs_set_cache_size", 00:05:43.917 "bdev_aio_delete", 00:05:43.917 "bdev_aio_rescan", 00:05:43.917 "bdev_aio_create", 00:05:43.917 "bdev_ftl_set_property", 00:05:43.917 "bdev_ftl_get_properties", 00:05:43.917 "bdev_ftl_get_stats", 00:05:43.917 "bdev_ftl_unmap", 00:05:43.917 "bdev_ftl_unload", 00:05:43.917 "bdev_ftl_delete", 00:05:43.917 "bdev_ftl_load", 00:05:43.917 "bdev_ftl_create", 00:05:43.917 "bdev_virtio_attach_controller", 00:05:43.917 "bdev_virtio_scsi_get_devices", 00:05:43.917 "bdev_virtio_detach_controller", 00:05:43.917 "bdev_virtio_blk_set_hotplug", 00:05:43.917 "bdev_iscsi_delete", 00:05:43.917 "bdev_iscsi_create", 00:05:43.917 "bdev_iscsi_set_options", 00:05:43.917 "bdev_uring_delete", 00:05:43.917 "bdev_uring_rescan", 00:05:43.917 "bdev_uring_create", 00:05:43.917 "accel_error_inject_error", 00:05:43.917 "ioat_scan_accel_module", 00:05:43.917 "dsa_scan_accel_module", 00:05:43.917 "iaa_scan_accel_module", 00:05:43.917 "vfu_virtio_create_scsi_endpoint", 00:05:43.917 "vfu_virtio_scsi_remove_target", 00:05:43.917 "vfu_virtio_scsi_add_target", 00:05:43.917 "vfu_virtio_create_blk_endpoint", 00:05:43.917 "vfu_virtio_delete_endpoint", 00:05:43.917 "keyring_file_remove_key", 00:05:43.917 "keyring_file_add_key", 00:05:43.917 "keyring_linux_set_options", 00:05:43.917 "iscsi_get_histogram", 00:05:43.917 "iscsi_enable_histogram", 00:05:43.917 "iscsi_set_options", 00:05:43.917 "iscsi_get_auth_groups", 00:05:43.917 "iscsi_auth_group_remove_secret", 00:05:43.917 "iscsi_auth_group_add_secret", 00:05:43.917 "iscsi_delete_auth_group", 00:05:43.917 "iscsi_create_auth_group", 00:05:43.917 "iscsi_set_discovery_auth", 00:05:43.917 "iscsi_get_options", 00:05:43.917 "iscsi_target_node_request_logout", 00:05:43.917 "iscsi_target_node_set_redirect", 00:05:43.917 "iscsi_target_node_set_auth", 00:05:43.917 "iscsi_target_node_add_lun", 00:05:43.917 "iscsi_get_stats", 00:05:43.917 "iscsi_get_connections", 00:05:43.917 "iscsi_portal_group_set_auth", 00:05:43.917 "iscsi_start_portal_group", 00:05:43.917 "iscsi_delete_portal_group", 00:05:43.917 "iscsi_create_portal_group", 00:05:43.917 "iscsi_get_portal_groups", 00:05:43.917 "iscsi_delete_target_node", 00:05:43.917 "iscsi_target_node_remove_pg_ig_maps", 00:05:43.917 "iscsi_target_node_add_pg_ig_maps", 00:05:43.917 "iscsi_create_target_node", 00:05:43.917 "iscsi_get_target_nodes", 00:05:43.917 "iscsi_delete_initiator_group", 00:05:43.917 "iscsi_initiator_group_remove_initiators", 00:05:43.917 "iscsi_initiator_group_add_initiators", 00:05:43.917 "iscsi_create_initiator_group", 00:05:43.917 "iscsi_get_initiator_groups", 00:05:43.917 "nvmf_set_crdt", 00:05:43.917 "nvmf_set_config", 00:05:43.917 "nvmf_set_max_subsystems", 00:05:43.917 "nvmf_stop_mdns_prr", 00:05:43.917 
"nvmf_publish_mdns_prr", 00:05:43.917 "nvmf_subsystem_get_listeners", 00:05:43.917 "nvmf_subsystem_get_qpairs", 00:05:43.917 "nvmf_subsystem_get_controllers", 00:05:43.917 "nvmf_get_stats", 00:05:43.917 "nvmf_get_transports", 00:05:43.917 "nvmf_create_transport", 00:05:43.917 "nvmf_get_targets", 00:05:43.917 "nvmf_delete_target", 00:05:43.917 "nvmf_create_target", 00:05:43.917 "nvmf_subsystem_allow_any_host", 00:05:43.917 "nvmf_subsystem_remove_host", 00:05:43.917 "nvmf_subsystem_add_host", 00:05:43.917 "nvmf_ns_remove_host", 00:05:43.917 "nvmf_ns_add_host", 00:05:43.917 "nvmf_subsystem_remove_ns", 00:05:43.917 "nvmf_subsystem_add_ns", 00:05:43.917 "nvmf_subsystem_listener_set_ana_state", 00:05:43.917 "nvmf_discovery_get_referrals", 00:05:43.917 "nvmf_discovery_remove_referral", 00:05:43.917 "nvmf_discovery_add_referral", 00:05:43.917 "nvmf_subsystem_remove_listener", 00:05:43.917 "nvmf_subsystem_add_listener", 00:05:43.917 "nvmf_delete_subsystem", 00:05:43.917 "nvmf_create_subsystem", 00:05:43.917 "nvmf_get_subsystems", 00:05:43.917 "env_dpdk_get_mem_stats", 00:05:43.917 "nbd_get_disks", 00:05:43.917 "nbd_stop_disk", 00:05:43.917 "nbd_start_disk", 00:05:43.917 "ublk_recover_disk", 00:05:43.917 "ublk_get_disks", 00:05:43.917 "ublk_stop_disk", 00:05:43.917 "ublk_start_disk", 00:05:43.917 "ublk_destroy_target", 00:05:43.917 "ublk_create_target", 00:05:43.917 "virtio_blk_create_transport", 00:05:43.917 "virtio_blk_get_transports", 00:05:43.917 "vhost_controller_set_coalescing", 00:05:43.917 "vhost_get_controllers", 00:05:43.917 "vhost_delete_controller", 00:05:43.917 "vhost_create_blk_controller", 00:05:43.917 "vhost_scsi_controller_remove_target", 00:05:43.917 "vhost_scsi_controller_add_target", 00:05:43.917 "vhost_start_scsi_controller", 00:05:43.917 "vhost_create_scsi_controller", 00:05:43.917 "thread_set_cpumask", 00:05:43.917 "framework_get_governor", 00:05:43.917 "framework_get_scheduler", 00:05:43.917 "framework_set_scheduler", 00:05:43.917 "framework_get_reactors", 00:05:43.917 "thread_get_io_channels", 00:05:43.917 "thread_get_pollers", 00:05:43.917 "thread_get_stats", 00:05:43.917 "framework_monitor_context_switch", 00:05:43.917 "spdk_kill_instance", 00:05:43.917 "log_enable_timestamps", 00:05:43.917 "log_get_flags", 00:05:43.917 "log_clear_flag", 00:05:43.917 "log_set_flag", 00:05:43.917 "log_get_level", 00:05:43.917 "log_set_level", 00:05:43.917 "log_get_print_level", 00:05:43.917 "log_set_print_level", 00:05:43.917 "framework_enable_cpumask_locks", 00:05:43.917 "framework_disable_cpumask_locks", 00:05:43.917 "framework_wait_init", 00:05:43.917 "framework_start_init", 00:05:43.917 "scsi_get_devices", 00:05:43.917 "bdev_get_histogram", 00:05:43.917 "bdev_enable_histogram", 00:05:43.917 "bdev_set_qos_limit", 00:05:43.917 "bdev_set_qd_sampling_period", 00:05:43.917 "bdev_get_bdevs", 00:05:43.917 "bdev_reset_iostat", 00:05:43.917 "bdev_get_iostat", 00:05:43.917 "bdev_examine", 00:05:43.917 "bdev_wait_for_examine", 00:05:43.917 "bdev_set_options", 00:05:43.917 "notify_get_notifications", 00:05:43.917 "notify_get_types", 00:05:43.917 "accel_get_stats", 00:05:43.917 "accel_set_options", 00:05:43.917 "accel_set_driver", 00:05:43.917 "accel_crypto_key_destroy", 00:05:43.917 "accel_crypto_keys_get", 00:05:43.917 "accel_crypto_key_create", 00:05:43.917 "accel_assign_opc", 00:05:43.917 "accel_get_module_info", 00:05:43.917 "accel_get_opc_assignments", 00:05:43.917 "vmd_rescan", 00:05:43.917 "vmd_remove_device", 00:05:43.917 "vmd_enable", 00:05:43.917 "sock_get_default_impl", 00:05:43.917 
"sock_set_default_impl", 00:05:43.917 "sock_impl_set_options", 00:05:43.917 "sock_impl_get_options", 00:05:43.917 "iobuf_get_stats", 00:05:43.917 "iobuf_set_options", 00:05:43.917 "keyring_get_keys", 00:05:43.917 "framework_get_pci_devices", 00:05:43.917 "framework_get_config", 00:05:43.917 "framework_get_subsystems", 00:05:43.917 "vfu_tgt_set_base_path", 00:05:43.917 "trace_get_info", 00:05:43.917 "trace_get_tpoint_group_mask", 00:05:43.917 "trace_disable_tpoint_group", 00:05:43.917 "trace_enable_tpoint_group", 00:05:43.917 "trace_clear_tpoint_mask", 00:05:43.917 "trace_set_tpoint_mask", 00:05:43.917 "spdk_get_version", 00:05:43.917 "rpc_get_methods" 00:05:43.917 ] 00:05:43.917 21:04:55 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:43.917 21:04:55 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.917 21:04:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.176 21:04:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:44.176 21:04:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 60678 00:05:44.176 21:04:55 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 60678 ']' 00:05:44.176 21:04:55 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 60678 00:05:44.176 21:04:55 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:44.176 21:04:55 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.176 21:04:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60678 00:05:44.176 killing process with pid 60678 00:05:44.176 21:04:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.176 21:04:55 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.176 21:04:55 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60678' 00:05:44.176 21:04:55 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 60678 00:05:44.176 21:04:55 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 60678 00:05:46.082 ************************************ 00:05:46.082 END TEST spdkcli_tcp 00:05:46.082 ************************************ 00:05:46.082 00:05:46.082 real 0m3.354s 00:05:46.082 user 0m5.946s 00:05:46.082 sys 0m0.511s 00:05:46.082 21:04:57 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.082 21:04:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.082 21:04:57 -- common/autotest_common.sh@1142 -- # return 0 00:05:46.082 21:04:57 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.082 21:04:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.083 21:04:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.083 21:04:57 -- common/autotest_common.sh@10 -- # set +x 00:05:46.083 ************************************ 00:05:46.083 START TEST dpdk_mem_utility 00:05:46.083 ************************************ 00:05:46.083 21:04:57 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.083 * Looking for test storage... 
00:05:46.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:46.083 21:04:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:46.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.083 21:04:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60781 00:05:46.083 21:04:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.083 21:04:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60781 00:05:46.083 21:04:57 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 60781 ']' 00:05:46.083 21:04:57 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.083 21:04:57 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.083 21:04:57 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.083 21:04:57 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.083 21:04:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.083 [2024-07-14 21:04:57.581997] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:46.083 [2024-07-14 21:04:57.582181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60781 ] 00:05:46.342 [2024-07-14 21:04:57.749790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.601 [2024-07-14 21:04:57.903072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.601 [2024-07-14 21:04:58.052728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:47.171 21:04:58 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.171 21:04:58 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:47.171 21:04:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:47.171 21:04:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:47.171 21:04:58 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.171 21:04:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.171 { 00:05:47.171 "filename": "/tmp/spdk_mem_dump.txt" 00:05:47.171 } 00:05:47.171 21:04:58 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.171 21:04:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:47.171 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:47.171 1 heaps totaling size 820.000000 MiB 00:05:47.171 size: 820.000000 MiB heap id: 0 00:05:47.171 end heaps---------- 00:05:47.171 8 mempools totaling size 598.116089 MiB 00:05:47.171 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:47.171 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:47.171 size: 84.521057 MiB name: bdev_io_60781 00:05:47.171 size: 51.011292 MiB name: evtpool_60781 00:05:47.171 size: 50.003479 
MiB name: msgpool_60781 00:05:47.171 size: 21.763794 MiB name: PDU_Pool 00:05:47.171 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:47.171 size: 0.026123 MiB name: Session_Pool 00:05:47.171 end mempools------- 00:05:47.171 6 memzones totaling size 4.142822 MiB 00:05:47.171 size: 1.000366 MiB name: RG_ring_0_60781 00:05:47.171 size: 1.000366 MiB name: RG_ring_1_60781 00:05:47.171 size: 1.000366 MiB name: RG_ring_4_60781 00:05:47.171 size: 1.000366 MiB name: RG_ring_5_60781 00:05:47.171 size: 0.125366 MiB name: RG_ring_2_60781 00:05:47.171 size: 0.015991 MiB name: RG_ring_3_60781 00:05:47.171 end memzones------- 00:05:47.171 21:04:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:47.171 heap id: 0 total size: 820.000000 MiB number of busy elements: 296 number of free elements: 18 00:05:47.171 list of free elements. size: 18.452515 MiB 00:05:47.171 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:47.171 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:47.171 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:47.171 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:47.171 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:47.171 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:47.171 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:47.171 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:47.171 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:47.171 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:47.171 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:47.171 element at address: 0x200000200000 with size: 0.830200 MiB 00:05:47.171 element at address: 0x20001b000000 with size: 0.565125 MiB 00:05:47.171 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:47.171 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:47.171 element at address: 0x200013800000 with size: 0.467651 MiB 00:05:47.171 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:47.171 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:47.171 list of standard malloc elements. 
size: 199.283081 MiB 00:05:47.171 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:47.171 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:47.171 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:47.171 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:47.171 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:47.171 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:47.171 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:47.171 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:47.171 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:47.171 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:47.171 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:47.171 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d6f00 with size: 0.000244 MiB 
00:05:47.171 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:47.171 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:47.172 element at 
address: 0x2000137ff380 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200013877b80 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0912c0 
with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0943c0 with size: 0.000244 MiB 
00:05:47.172 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:47.172 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:47.172 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846b380 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846b480 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846b580 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:47.173 element at 
address: 0x20002846cd80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846e480 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846e580 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846e680 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:47.173 element at address: 0x20002846fe80 
with size: 0.000244 MiB 00:05:47.173 list of memzone associated elements. size: 602.264404 MiB 00:05:47.173 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:47.173 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:47.173 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:47.173 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:47.173 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:47.173 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_60781_0 00:05:47.173 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:47.173 associated memzone info: size: 48.002930 MiB name: MP_evtpool_60781_0 00:05:47.173 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:47.173 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60781_0 00:05:47.173 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:47.173 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:47.173 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:47.173 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:47.173 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:47.173 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_60781 00:05:47.173 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:47.173 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60781 00:05:47.173 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:47.173 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60781 00:05:47.173 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:47.173 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:47.173 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:47.173 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:47.173 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:47.173 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:47.173 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:47.173 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:47.173 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:47.173 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60781 00:05:47.173 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:47.173 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60781 00:05:47.173 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:47.173 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60781 00:05:47.173 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:47.173 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60781 00:05:47.173 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:47.173 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60781 00:05:47.173 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:47.173 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:47.173 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:47.173 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:47.173 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:47.173 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:47.173 element at address: 
0x200003adf740 with size: 0.125549 MiB 00:05:47.173 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60781 00:05:47.173 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:47.173 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:47.173 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:47.173 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:47.173 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:47.173 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60781 00:05:47.173 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:47.173 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:47.173 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:47.173 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60781 00:05:47.173 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:47.173 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60781 00:05:47.173 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:47.173 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:47.173 21:04:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:47.173 21:04:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60781 00:05:47.173 21:04:58 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 60781 ']' 00:05:47.173 21:04:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 60781 00:05:47.173 21:04:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:47.173 21:04:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.173 21:04:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60781 00:05:47.173 killing process with pid 60781 00:05:47.173 21:04:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.173 21:04:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.173 21:04:58 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60781' 00:05:47.173 21:04:58 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 60781 00:05:47.173 21:04:58 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 60781 00:05:49.079 00:05:49.079 real 0m3.027s 00:05:49.079 user 0m3.176s 00:05:49.079 sys 0m0.435s 00:05:49.079 21:05:00 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.079 21:05:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.079 ************************************ 00:05:49.079 END TEST dpdk_mem_utility 00:05:49.079 ************************************ 00:05:49.079 21:05:00 -- common/autotest_common.sh@1142 -- # return 0 00:05:49.079 21:05:00 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:49.079 21:05:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.079 21:05:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.079 21:05:00 -- common/autotest_common.sh@10 -- # set +x 00:05:49.079 ************************************ 00:05:49.079 START TEST event 00:05:49.079 ************************************ 00:05:49.079 21:05:00 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:49.079 * Looking for test storage... 
00:05:49.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:49.079 21:05:00 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:49.079 21:05:00 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:49.079 21:05:00 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.079 21:05:00 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:49.079 21:05:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.079 21:05:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.079 ************************************ 00:05:49.079 START TEST event_perf 00:05:49.079 ************************************ 00:05:49.079 21:05:00 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.079 Running I/O for 1 seconds...[2024-07-14 21:05:00.573703] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:49.079 [2024-07-14 21:05:00.573870] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60870 ] 00:05:49.338 [2024-07-14 21:05:00.739166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.597 [2024-07-14 21:05:00.901643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.597 [2024-07-14 21:05:00.901732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.597 Running I/O for 1 seconds...[2024-07-14 21:05:00.902377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.597 [2024-07-14 21:05:00.902398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.974 00:05:50.975 lcore 0: 188324 00:05:50.975 lcore 1: 188321 00:05:50.975 lcore 2: 188322 00:05:50.975 lcore 3: 188324 00:05:50.975 done. 00:05:50.975 00:05:50.975 real 0m1.704s 00:05:50.975 user 0m4.480s 00:05:50.975 sys 0m0.096s 00:05:50.975 21:05:02 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.975 21:05:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.975 ************************************ 00:05:50.975 END TEST event_perf 00:05:50.975 ************************************ 00:05:50.975 21:05:02 event -- common/autotest_common.sh@1142 -- # return 0 00:05:50.975 21:05:02 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:50.975 21:05:02 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:50.975 21:05:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.975 21:05:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.975 ************************************ 00:05:50.975 START TEST event_reactor 00:05:50.975 ************************************ 00:05:50.975 21:05:02 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:50.975 [2024-07-14 21:05:02.329741] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:50.975 [2024-07-14 21:05:02.329931] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60915 ] 00:05:50.975 [2024-07-14 21:05:02.498069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.234 [2024-07-14 21:05:02.652184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.624 test_start 00:05:52.624 oneshot 00:05:52.624 tick 100 00:05:52.624 tick 100 00:05:52.624 tick 250 00:05:52.624 tick 100 00:05:52.624 tick 100 00:05:52.624 tick 100 00:05:52.624 tick 250 00:05:52.624 tick 500 00:05:52.624 tick 100 00:05:52.624 tick 100 00:05:52.624 tick 250 00:05:52.624 tick 100 00:05:52.624 tick 100 00:05:52.624 test_end 00:05:52.624 00:05:52.624 real 0m1.706s 00:05:52.624 user 0m1.495s 00:05:52.624 sys 0m0.098s 00:05:52.624 ************************************ 00:05:52.624 END TEST event_reactor 00:05:52.624 ************************************ 00:05:52.624 21:05:03 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.624 21:05:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:52.624 21:05:04 event -- common/autotest_common.sh@1142 -- # return 0 00:05:52.624 21:05:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.624 21:05:04 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:52.624 21:05:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.624 21:05:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.624 ************************************ 00:05:52.624 START TEST event_reactor_perf 00:05:52.624 ************************************ 00:05:52.624 21:05:04 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.624 [2024-07-14 21:05:04.087080] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:52.624 [2024-07-14 21:05:04.087280] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60951 ] 00:05:52.882 [2024-07-14 21:05:04.257011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.140 [2024-07-14 21:05:04.456258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.517 test_start 00:05:54.517 test_end 00:05:54.517 Performance: 326186 events per second 00:05:54.517 00:05:54.517 real 0m1.736s 00:05:54.517 user 0m1.539s 00:05:54.517 sys 0m0.086s 00:05:54.517 21:05:05 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.517 21:05:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.517 ************************************ 00:05:54.517 END TEST event_reactor_perf 00:05:54.517 ************************************ 00:05:54.517 21:05:05 event -- common/autotest_common.sh@1142 -- # return 0 00:05:54.517 21:05:05 event -- event/event.sh@49 -- # uname -s 00:05:54.517 21:05:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:54.517 21:05:05 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:54.517 21:05:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.517 21:05:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.517 21:05:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.517 ************************************ 00:05:54.517 START TEST event_scheduler 00:05:54.518 ************************************ 00:05:54.518 21:05:05 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:54.518 * Looking for test storage... 00:05:54.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:54.518 21:05:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:54.518 21:05:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=61014 00:05:54.518 21:05:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.518 21:05:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 61014 00:05:54.518 21:05:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:54.518 21:05:05 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 61014 ']' 00:05:54.518 21:05:05 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.518 21:05:05 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.518 21:05:05 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.518 21:05:05 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.518 21:05:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.518 [2024-07-14 21:05:06.017870] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:54.518 [2024-07-14 21:05:06.018088] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61014 ] 00:05:54.776 [2024-07-14 21:05:06.192005] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.034 [2024-07-14 21:05:06.412190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.034 [2024-07-14 21:05:06.412301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.034 [2024-07-14 21:05:06.412430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.034 [2024-07-14 21:05:06.412553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.601 21:05:06 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.601 21:05:06 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:55.601 21:05:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:55.601 21:05:06 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.601 21:05:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.601 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.601 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.601 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.601 POWER: Cannot set governor of lcore 0 to performance 00:05:55.601 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.601 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.601 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.601 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.601 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:55.601 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:55.601 POWER: Unable to set Power Management Environment for lcore 0 00:05:55.601 [2024-07-14 21:05:06.891027] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:55.601 [2024-07-14 21:05:06.891060] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:55.601 [2024-07-14 21:05:06.891104] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:55.601 [2024-07-14 21:05:06.891173] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:55.601 [2024-07-14 21:05:06.891207] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:55.601 [2024-07-14 21:05:06.891225] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:55.601 21:05:06 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.602 21:05:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:55.602 21:05:06 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.602 21:05:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.602 [2024-07-14 21:05:07.044667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:55.602 [2024-07-14 21:05:07.125277] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:55.602 21:05:07 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.602 21:05:07 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:55.602 21:05:07 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.602 21:05:07 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.602 21:05:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.602 ************************************ 00:05:55.602 START TEST scheduler_create_thread 00:05:55.602 ************************************ 00:05:55.602 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:55.602 21:05:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:55.602 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.602 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.602 2 00:05:55.602 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.602 21:05:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.861 3 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.861 4 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.861 5 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.861 6 00:05:55.861 
21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.861 7 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.861 8 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.861 9 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.861 10 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.861 21:05:07 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.861 21:05:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.797 21:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.797 21:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:56.797 21:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:56.797 21:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.797 21:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.730 21:05:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.730 00:05:57.730 real 0m2.137s 00:05:57.730 user 0m0.020s 00:05:57.730 sys 0m0.006s 00:05:57.730 21:05:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.730 ************************************ 00:05:57.730 21:05:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.730 END TEST scheduler_create_thread 00:05:57.730 ************************************ 00:05:57.988 21:05:09 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:57.988 21:05:09 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:57.988 21:05:09 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 61014 00:05:57.988 21:05:09 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 61014 ']' 00:05:57.988 21:05:09 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 61014 00:05:57.988 21:05:09 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:57.988 21:05:09 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.988 21:05:09 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61014 00:05:57.988 21:05:09 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:57.988 21:05:09 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:57.988 killing process with pid 61014 00:05:57.988 21:05:09 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61014' 00:05:57.988 21:05:09 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 61014 00:05:57.988 21:05:09 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 61014 00:05:58.246 [2024-07-14 21:05:09.754099] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
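The scheduler run traced above is driven entirely over RPC: scheduler.sh switches the framework to the dynamic scheduler, completes framework init, then creates, retunes and deletes threads through the scheduler_plugin commands visible in the xtrace output (the rpc_cmd helper seen in the trace forwards its arguments to scripts/rpc.py). Condensed into a standalone sketch, not the test script itself; it assumes a scheduler test app is already listening on the default RPC socket, that PYTHONPATH makes scheduler_plugin importable, and the thread ids 11/12 are only examples of the ids the create calls return:

#!/usr/bin/env bash
# Sketch mirroring the rpc_cmd calls traced above, using scripts/rpc.py directly.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC framework_set_scheduler dynamic   # scheduler.sh@39 in the trace
$RPC framework_start_init              # scheduler.sh@40

# Thread create/tune/delete go through the test's RPC client plugin.
$RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
$RPC --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0    # returns a thread id, e.g. 11
$RPC --plugin scheduler_plugin scheduler_thread_set_active 11 50
$RPC --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100      # returns e.g. 12
$RPC --plugin scheduler_plugin scheduler_thread_delete 12
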
00:05:59.180 00:05:59.180 real 0m4.888s 00:05:59.180 user 0m7.970s 00:05:59.180 sys 0m0.401s 00:05:59.181 21:05:10 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.181 21:05:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.181 ************************************ 00:05:59.181 END TEST event_scheduler 00:05:59.181 ************************************ 00:05:59.438 21:05:10 event -- common/autotest_common.sh@1142 -- # return 0 00:05:59.438 21:05:10 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:59.438 21:05:10 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:59.438 21:05:10 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.438 21:05:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.438 21:05:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.438 ************************************ 00:05:59.438 START TEST app_repeat 00:05:59.438 ************************************ 00:05:59.438 21:05:10 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:59.438 21:05:10 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.438 21:05:10 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.438 21:05:10 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:59.438 21:05:10 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.438 21:05:10 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:59.438 21:05:10 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:59.438 21:05:10 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:59.438 21:05:10 event.app_repeat -- event/event.sh@19 -- # repeat_pid=61120 00:05:59.438 21:05:10 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:59.438 21:05:10 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.438 Process app_repeat pid: 61120 00:05:59.438 21:05:10 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61120' 00:05:59.438 21:05:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.438 spdk_app_start Round 0 00:05:59.438 21:05:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:59.438 21:05:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61120 /var/tmp/spdk-nbd.sock 00:05:59.438 21:05:10 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61120 ']' 00:05:59.438 21:05:10 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.438 21:05:10 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.438 21:05:10 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.438 21:05:10 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.438 21:05:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.438 [2024-07-14 21:05:10.831301] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:59.438 [2024-07-14 21:05:10.831457] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61120 ] 00:05:59.696 [2024-07-14 21:05:10.987606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.696 [2024-07-14 21:05:11.147170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.696 [2024-07-14 21:05:11.147183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.954 [2024-07-14 21:05:11.308216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.521 21:05:11 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.521 21:05:11 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:00.521 21:05:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.780 Malloc0 00:06:00.780 21:05:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.038 Malloc1 00:06:01.038 21:05:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.038 /dev/nbd0 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.038 21:05:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.038 21:05:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:01.038 21:05:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:01.038 21:05:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.038 21:05:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.038 21:05:12 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:01.039 21:05:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:01.039 21:05:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.039 21:05:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.039 21:05:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.039 1+0 records in 00:06:01.039 1+0 records out 00:06:01.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352625 s, 11.6 MB/s 00:06:01.039 21:05:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.297 21:05:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:01.297 21:05:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.297 21:05:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.297 21:05:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:01.297 21:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.297 21:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.297 21:05:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.297 /dev/nbd1 00:06:01.555 21:05:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.555 21:05:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.555 21:05:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:01.555 21:05:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:01.555 21:05:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.555 21:05:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.555 21:05:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:01.555 21:05:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:01.555 21:05:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.555 21:05:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.555 21:05:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.555 1+0 records in 00:06:01.555 1+0 records out 00:06:01.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206725 s, 19.8 MB/s 00:06:01.555 21:05:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.555 21:05:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:01.555 21:05:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.555 21:05:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.555 21:05:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:01.555 21:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.555 21:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.555 21:05:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:06:01.555 21:05:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.555 21:05:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.814 { 00:06:01.814 "nbd_device": "/dev/nbd0", 00:06:01.814 "bdev_name": "Malloc0" 00:06:01.814 }, 00:06:01.814 { 00:06:01.814 "nbd_device": "/dev/nbd1", 00:06:01.814 "bdev_name": "Malloc1" 00:06:01.814 } 00:06:01.814 ]' 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.814 { 00:06:01.814 "nbd_device": "/dev/nbd0", 00:06:01.814 "bdev_name": "Malloc0" 00:06:01.814 }, 00:06:01.814 { 00:06:01.814 "nbd_device": "/dev/nbd1", 00:06:01.814 "bdev_name": "Malloc1" 00:06:01.814 } 00:06:01.814 ]' 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.814 /dev/nbd1' 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.814 /dev/nbd1' 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.814 256+0 records in 00:06:01.814 256+0 records out 00:06:01.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00963541 s, 109 MB/s 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.814 256+0 records in 00:06:01.814 256+0 records out 00:06:01.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284118 s, 36.9 MB/s 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.814 256+0 records in 00:06:01.814 256+0 records out 00:06:01.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293634 s, 35.7 MB/s 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.814 21:05:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.072 21:05:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.072 21:05:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.072 21:05:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.072 21:05:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.072 21:05:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.072 21:05:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.072 21:05:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.072 21:05:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.072 21:05:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.072 21:05:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.332 21:05:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.332 21:05:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.332 21:05:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.332 21:05:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.332 21:05:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.332 21:05:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.332 21:05:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.332 21:05:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.332 21:05:13 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.332 21:05:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.332 21:05:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.591 21:05:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.591 21:05:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.591 21:05:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.591 21:05:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.591 21:05:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.591 21:05:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.591 21:05:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.591 21:05:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.591 21:05:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.591 21:05:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.591 21:05:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.591 21:05:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.591 21:05:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.159 21:05:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.095 [2024-07-14 21:05:15.475270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.095 [2024-07-14 21:05:15.631338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.095 [2024-07-14 21:05:15.631346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.354 [2024-07-14 21:05:15.778226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.354 [2024-07-14 21:05:15.778385] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.354 [2024-07-14 21:05:15.778411] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.255 21:05:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.255 spdk_app_start Round 1 00:06:06.255 21:05:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:06.255 21:05:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61120 /var/tmp/spdk-nbd.sock 00:06:06.255 21:05:17 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61120 ']' 00:06:06.255 21:05:17 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.255 21:05:17 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.255 21:05:17 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
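Round 1 below repeats the create/export/verify cycle that Round 0 traced above. Stripped of the xtrace noise, each round boils down to roughly the following sketch; it is not the test script itself, and it assumes an app_repeat instance is already serving RPCs on /var/tmp/spdk-nbd.sock, that the nbd kernel module is loaded (as the earlier modprobe lines indicate), and that the RPC/TESTDIR variables are shorthand introduced only here:

#!/usr/bin/env bash
# Sketch of the data path seen in the trace: malloc bdevs exported over NBD,
# filled with random data via dd, then verified with cmp.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
TESTDIR=/home/vagrant/spdk_repo/spdk/test/event

$RPC bdev_malloc_create 64 4096        # -> Malloc0
$RPC bdev_malloc_create 64 4096        # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of=$TESTDIR/nbdrandtest bs=4096 count=256   # reference data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$TESTDIR/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
    cmp -b -n 1M $TESTDIR/nbdrandtest $nbd                     # verify readback
done
rm $TESTDIR/nbdrandtest

$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
$RPC nbd_get_disks                     # trace shows '[]' once both disks are stopped
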
00:06:06.255 21:05:17 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.255 21:05:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.255 21:05:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.255 21:05:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:06.255 21:05:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.513 Malloc0 00:06:06.513 21:05:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.772 Malloc1 00:06:06.772 21:05:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.772 21:05:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.772 21:05:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.772 21:05:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.772 21:05:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.772 21:05:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.772 21:05:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.772 21:05:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.772 21:05:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.772 21:05:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.772 21:05:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.772 21:05:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.772 21:05:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.772 21:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.772 21:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.772 21:05:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.030 /dev/nbd0 00:06:07.030 21:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.030 21:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.030 21:05:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:07.030 21:05:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:07.030 21:05:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:07.030 21:05:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:07.030 21:05:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:07.030 21:05:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:07.030 21:05:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:07.030 21:05:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:07.030 21:05:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.030 1+0 records in 00:06:07.030 1+0 records out 
00:06:07.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333101 s, 12.3 MB/s 00:06:07.030 21:05:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.030 21:05:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:07.030 21:05:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.030 21:05:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:07.030 21:05:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:07.030 21:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.030 21:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.030 21:05:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.289 /dev/nbd1 00:06:07.289 21:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.289 21:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.289 21:05:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:07.289 21:05:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:07.289 21:05:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:07.289 21:05:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:07.289 21:05:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:07.289 21:05:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:07.289 21:05:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:07.289 21:05:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:07.289 21:05:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.289 1+0 records in 00:06:07.289 1+0 records out 00:06:07.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341686 s, 12.0 MB/s 00:06:07.289 21:05:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.289 21:05:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:07.289 21:05:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.289 21:05:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:07.289 21:05:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:07.289 21:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.289 21:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.289 21:05:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.289 21:05:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.289 21:05:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.548 21:05:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.548 { 00:06:07.548 "nbd_device": "/dev/nbd0", 00:06:07.548 "bdev_name": "Malloc0" 00:06:07.548 }, 00:06:07.548 { 00:06:07.548 "nbd_device": "/dev/nbd1", 00:06:07.548 "bdev_name": "Malloc1" 00:06:07.548 } 
00:06:07.548 ]' 00:06:07.548 21:05:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.548 { 00:06:07.548 "nbd_device": "/dev/nbd0", 00:06:07.548 "bdev_name": "Malloc0" 00:06:07.548 }, 00:06:07.548 { 00:06:07.548 "nbd_device": "/dev/nbd1", 00:06:07.548 "bdev_name": "Malloc1" 00:06:07.548 } 00:06:07.548 ]' 00:06:07.548 21:05:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.548 /dev/nbd1' 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.548 /dev/nbd1' 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.548 256+0 records in 00:06:07.548 256+0 records out 00:06:07.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107341 s, 97.7 MB/s 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.548 256+0 records in 00:06:07.548 256+0 records out 00:06:07.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026051 s, 40.3 MB/s 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.548 21:05:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.807 256+0 records in 00:06:07.807 256+0 records out 00:06:07.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0319979 s, 32.8 MB/s 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.807 21:05:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.102 21:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.102 21:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.102 21:05:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.102 21:05:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.102 21:05:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.102 21:05:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.102 21:05:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.102 21:05:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.102 21:05:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.102 21:05:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.103 21:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.103 21:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.103 21:05:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.103 21:05:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.103 21:05:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.103 21:05:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.103 21:05:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.103 21:05:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.383 21:05:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.383 21:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.383 21:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:08.383 21:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.383 21:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.383 21:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.383 21:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.383 21:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.383 21:05:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.383 21:05:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.383 21:05:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.383 21:05:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.383 21:05:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.950 21:05:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.886 [2024-07-14 21:05:21.243522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.886 [2024-07-14 21:05:21.386570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.886 [2024-07-14 21:05:21.386570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.145 [2024-07-14 21:05:21.538450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.145 [2024-07-14 21:05:21.538570] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.145 [2024-07-14 21:05:21.538590] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.050 21:05:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:12.050 spdk_app_start Round 2 00:06:12.050 21:05:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:12.050 21:05:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61120 /var/tmp/spdk-nbd.sock 00:06:12.050 21:05:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61120 ']' 00:06:12.050 21:05:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.050 21:05:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.050 21:05:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
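Editor's note: the Round 1 trace above reduces to a short rpc.py sequence against the app_repeat instance: create two malloc bdevs (size argument 64, block size 4096, as in the trace), export them over NBD, confirm both exports, then tear everything down and signal the app. The condensed sketch below uses only paths and RPCs that appear in the log; the waitfornbd loop bounds and sleep interval are paraphrased from the helper, so treat them as illustrative rather than the verbatim source.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  $rpc -s $sock bdev_malloc_create 64 4096                    # -> Malloc0
  $rpc -s $sock bdev_malloc_create 64 4096                    # -> Malloc1
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
  $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
  # waitfornbd: poll /proc/partitions, then one O_DIRECT read proves the device answers
  for ((i = 1; i <= 20; i++)); do grep -q -w nbd0 /proc/partitions && break; sleep 0.1; done  # interval assumed
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
  $rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd   # expect 2
  $rpc -s $sock nbd_stop_disk /dev/nbd0
  $rpc -s $sock nbd_stop_disk /dev/nbd1
  $rpc -s $sock spdk_kill_instance SIGTERM                    # event.sh then sleeps 3s and starts the next round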
00:06:12.050 21:05:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.050 21:05:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.050 21:05:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.050 21:05:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:12.050 21:05:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.308 Malloc0 00:06:12.309 21:05:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.877 Malloc1 00:06:12.877 21:05:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.877 /dev/nbd0 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.877 21:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.877 21:05:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:12.877 21:05:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:12.877 21:05:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:12.877 21:05:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:12.877 21:05:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:12.877 21:05:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:12.877 21:05:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:12.877 21:05:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:12.877 21:05:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.877 1+0 records in 00:06:12.877 1+0 records out 
00:06:12.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003271 s, 12.5 MB/s 00:06:12.878 21:05:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.878 21:05:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:12.878 21:05:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.878 21:05:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:12.878 21:05:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:12.878 21:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.878 21:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.878 21:05:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.137 /dev/nbd1 00:06:13.137 21:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.137 21:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.137 21:05:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:13.137 21:05:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:13.137 21:05:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:13.137 21:05:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:13.137 21:05:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:13.137 21:05:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:13.137 21:05:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:13.137 21:05:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:13.137 21:05:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.137 1+0 records in 00:06:13.137 1+0 records out 00:06:13.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241907 s, 16.9 MB/s 00:06:13.137 21:05:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.137 21:05:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:13.137 21:05:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.137 21:05:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:13.137 21:05:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:13.137 21:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.137 21:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.137 21:05:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.137 21:05:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.137 21:05:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.396 21:05:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.396 { 00:06:13.396 "nbd_device": "/dev/nbd0", 00:06:13.396 "bdev_name": "Malloc0" 00:06:13.396 }, 00:06:13.396 { 00:06:13.396 "nbd_device": "/dev/nbd1", 00:06:13.396 "bdev_name": "Malloc1" 00:06:13.396 } 
00:06:13.396 ]' 00:06:13.396 21:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.396 { 00:06:13.396 "nbd_device": "/dev/nbd0", 00:06:13.396 "bdev_name": "Malloc0" 00:06:13.396 }, 00:06:13.396 { 00:06:13.396 "nbd_device": "/dev/nbd1", 00:06:13.396 "bdev_name": "Malloc1" 00:06:13.396 } 00:06:13.396 ]' 00:06:13.396 21:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.396 21:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.396 /dev/nbd1' 00:06:13.396 21:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.396 /dev/nbd1' 00:06:13.396 21:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.396 21:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.396 21:05:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.397 21:05:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.397 21:05:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.397 21:05:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.397 21:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.397 21:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.397 21:05:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.397 21:05:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.397 21:05:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.397 21:05:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.397 256+0 records in 00:06:13.397 256+0 records out 00:06:13.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630264 s, 166 MB/s 00:06:13.397 21:05:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.397 21:05:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.656 256+0 records in 00:06:13.656 256+0 records out 00:06:13.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298613 s, 35.1 MB/s 00:06:13.656 21:05:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.656 21:05:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.656 256+0 records in 00:06:13.656 256+0 records out 00:06:13.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029097 s, 36.0 MB/s 00:06:13.656 21:05:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.656 21:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.656 21:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.656 21:05:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.656 21:05:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.656 21:05:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.656 21:05:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.656 21:05:24 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:13.656 21:05:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.656 21:05:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.656 21:05:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.656 21:05:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.656 21:05:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.656 21:05:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.656 21:05:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.656 21:05:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.656 21:05:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.656 21:05:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.656 21:05:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.916 21:05:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.485 21:05:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.485 21:05:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.485 21:05:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:06:14.485 21:05:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.485 21:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.485 21:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.485 21:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.485 21:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.485 21:05:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.485 21:05:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.485 21:05:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.485 21:05:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.485 21:05:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.744 21:05:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.696 [2024-07-14 21:05:27.241536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.955 [2024-07-14 21:05:27.397045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.955 [2024-07-14 21:05:27.397052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.214 [2024-07-14 21:05:27.554236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.214 [2024-07-14 21:05:27.554379] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.214 [2024-07-14 21:05:27.554405] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.120 21:05:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 61120 /var/tmp/spdk-nbd.sock 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61120 ']' 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
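Editor's note: the write/verify step both rounds run (nbd_dd_data_verify in the trace) is plain dd plus cmp: fill a scratch file with 1 MiB of random data, copy it onto each NBD device with O_DIRECT, then compare the first 1M of each device back against the file. The sketch below uses the exact file and device names from the log; the loops are shorthand for the per-device trace lines.

  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

  dd if=/dev/urandom of=$tmp bs=4096 count=256                 # 256 * 4096 B = 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct        # write the pattern to the exported bdev
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M $tmp $nbd                                   # any mismatch makes cmp exit non-zero and fails the test
  done
  rm $tmp                                                      # cleanup, as in nbd_common.sh@85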
00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:18.120 21:05:29 event.app_repeat -- event/event.sh@39 -- # killprocess 61120 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 61120 ']' 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 61120 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61120 00:06:18.120 killing process with pid 61120 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61120' 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@967 -- # kill 61120 00:06:18.120 21:05:29 event.app_repeat -- common/autotest_common.sh@972 -- # wait 61120 00:06:19.055 spdk_app_start is called in Round 0. 00:06:19.055 Shutdown signal received, stop current app iteration 00:06:19.056 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:19.056 spdk_app_start is called in Round 1. 00:06:19.056 Shutdown signal received, stop current app iteration 00:06:19.056 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:19.056 spdk_app_start is called in Round 2. 00:06:19.056 Shutdown signal received, stop current app iteration 00:06:19.056 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:19.056 spdk_app_start is called in Round 3. 
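Editor's note: killprocess, used here on pid 61120 and again by every cpu_locks test below, always runs the same checks: make sure the pid is still alive, confirm on Linux that it is the reactor process rather than a sudo wrapper, then kill it and reap it. The sketch below is reconstructed from the @948-@972 trace lines, not copied from autotest_common.sh, so it is a paraphrase; the sudo branch seen in the guard is not expanded here.

  killprocess() {                                   # usage: killprocess <pid>
      [ -n "$1" ] || return 1                       # matches the "'[' -z 61120 ']'" guard
      kill -0 "$1"                                  # fails if the process already exited
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$1")   # reactor_0 in this run
      fi
      [ "$process_name" = sudo ] && return 1        # sudo handling omitted in this sketch
      echo "killing process with pid $1"
      kill "$1"
      wait "$1"                                     # reap and propagate the exit status
  }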
00:06:19.056 Shutdown signal received, stop current app iteration 00:06:19.056 ************************************ 00:06:19.056 END TEST app_repeat 00:06:19.056 ************************************ 00:06:19.056 21:05:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:19.056 21:05:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:19.056 00:06:19.056 real 0m19.682s 00:06:19.056 user 0m42.520s 00:06:19.056 sys 0m2.404s 00:06:19.056 21:05:30 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.056 21:05:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.056 21:05:30 event -- common/autotest_common.sh@1142 -- # return 0 00:06:19.056 21:05:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:19.056 21:05:30 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:19.056 21:05:30 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.056 21:05:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.056 21:05:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.056 ************************************ 00:06:19.056 START TEST cpu_locks 00:06:19.056 ************************************ 00:06:19.056 21:05:30 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:19.056 * Looking for test storage... 00:06:19.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:19.056 21:05:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:19.056 21:05:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:19.056 21:05:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:19.056 21:05:30 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:19.056 21:05:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.056 21:05:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.056 21:05:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.314 ************************************ 00:06:19.314 START TEST default_locks 00:06:19.314 ************************************ 00:06:19.314 21:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:19.314 21:05:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61559 00:06:19.314 21:05:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61559 00:06:19.314 21:05:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.314 21:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61559 ']' 00:06:19.314 21:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.314 21:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.314 21:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
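Editor's note: every cpu_locks sub-test boots its own target the way default_locks does here: launch spdk_tgt pinned to core 0 with -m 0x1 and block until the RPC socket answers. A minimal start-up sketch with the binary path and socket taken from the trace; waitforlisten is the autotest_common.sh helper that polls the UNIX socket (max_retries=100 in the trace), and spdk_tgt_pid is simply the shell's $! here.

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  $spdk_tgt -m 0x1 &                                  # core mask 0x1: a single reactor on core 0
  spdk_tgt_pid=$!
  waitforlisten $spdk_tgt_pid /var/tmp/spdk.sock      # retries until the RPC socket accepts connections
  # ... lock checks run while the target is alive ...
  killprocess $spdk_tgt_pid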
00:06:19.314 21:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.314 21:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.314 [2024-07-14 21:05:30.762621] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:19.314 [2024-07-14 21:05:30.762845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61559 ] 00:06:19.572 [2024-07-14 21:05:30.933667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.572 [2024-07-14 21:05:31.082853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.831 [2024-07-14 21:05:31.233970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.397 21:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.397 21:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:20.397 21:05:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61559 00:06:20.397 21:05:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61559 00:06:20.397 21:05:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.655 21:05:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61559 00:06:20.655 21:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 61559 ']' 00:06:20.655 21:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 61559 00:06:20.655 21:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:20.655 21:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.655 21:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61559 00:06:20.655 21:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.655 killing process with pid 61559 00:06:20.655 21:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.655 21:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61559' 00:06:20.655 21:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 61559 00:06:20.655 21:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 61559 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61559 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61559 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.586 21:05:34 
event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 61559 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61559 ']' 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.586 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61559) - No such process 00:06:22.586 ERROR: process (pid: 61559) is no longer running 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:22.586 00:06:22.586 real 0m3.404s 00:06:22.586 user 0m3.507s 00:06:22.586 sys 0m0.590s 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.586 ************************************ 00:06:22.586 END TEST default_locks 00:06:22.586 ************************************ 00:06:22.586 21:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.586 21:05:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:22.586 21:05:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:22.586 21:05:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.586 21:05:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.586 21:05:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.586 ************************************ 00:06:22.586 START TEST default_locks_via_rpc 00:06:22.586 ************************************ 00:06:22.586 21:05:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:22.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
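Editor's note: what default_locks actually asserts is spelled out by the lslocks calls above: while the target runs, its reactor holds a file lock whose name contains spdk_cpu_lock for the claimed core, and once the process is gone, a second waitforlisten on the same pid has to fail (the NOT helper inverts the status; the trace checks es=1 and the "No such process" error). Condensed below using this run's pid; the pipe between lslocks and grep is implied by the two locks_exist trace lines.

  # while spdk_tgt (pid 61559 in this run) is alive: the per-core lock must be visible
  lslocks -p 61559 | grep -q spdk_cpu_lock

  killprocess 61559

  # after the kill: waiting on the dead pid must fail, and NOT(...) turns that failure into success
  NOT waitforlisten 61559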
00:06:22.586 21:05:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61629 00:06:22.586 21:05:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61629 00:06:22.586 21:05:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.586 21:05:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61629 ']' 00:06:22.586 21:05:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.586 21:05:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.586 21:05:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.586 21:05:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.586 21:05:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.843 [2024-07-14 21:05:34.194292] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:22.843 [2024-07-14 21:05:34.194473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61629 ] 00:06:22.843 [2024-07-14 21:05:34.365074] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.100 [2024-07-14 21:05:34.524445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.358 [2024-07-14 21:05:34.669559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # 
locks_exist 61629 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61629 00:06:23.617 21:05:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.184 21:05:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61629 00:06:24.184 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 61629 ']' 00:06:24.184 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 61629 00:06:24.184 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:24.184 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.184 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61629 00:06:24.184 killing process with pid 61629 00:06:24.184 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.184 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.184 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61629' 00:06:24.184 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 61629 00:06:24.184 21:05:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 61629 00:06:26.089 00:06:26.089 real 0m3.309s 00:06:26.089 user 0m3.365s 00:06:26.089 sys 0m0.609s 00:06:26.089 21:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.089 21:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.089 ************************************ 00:06:26.089 END TEST default_locks_via_rpc 00:06:26.089 ************************************ 00:06:26.089 21:05:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:26.089 21:05:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:26.089 21:05:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.089 21:05:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.089 21:05:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.089 ************************************ 00:06:26.089 START TEST non_locking_app_on_locked_coremask 00:06:26.089 ************************************ 00:06:26.089 21:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:26.089 21:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61696 00:06:26.089 21:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61696 /var/tmp/spdk.sock 00:06:26.089 21:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61696 ']' 00:06:26.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
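Editor's note: default_locks_via_rpc toggles the same lock at runtime instead of at start-up: the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs seen above drop and retake the per-core lock file, with lslocks checked in between. A sketch of that sequence; rpc_cmd is the autotest wrapper that sends the RPC to the target's /var/tmp/spdk.sock, and no_locks is the helper visible in the trace that fails if any spdk_cpu_lock file is still held.

  rpc_cmd framework_disable_cpumask_locks             # release the core-0 lock while the app keeps running
  no_locks                                            # lock_files=() in the trace, i.e. zero lock files expected
  rpc_cmd framework_enable_cpumask_locks              # retake it
  lslocks -p 61629 | grep -q spdk_cpu_lock            # the lock is back (61629 is this run's target pid)
  killprocess 61629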
00:06:26.089 21:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.089 21:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.089 21:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.089 21:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.089 21:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.089 21:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.089 [2024-07-14 21:05:37.553624] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:26.089 [2024-07-14 21:05:37.553845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61696 ] 00:06:26.348 [2024-07-14 21:05:37.714456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.348 [2024-07-14 21:05:37.875374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.607 [2024-07-14 21:05:38.043872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.173 21:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.173 21:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:27.173 21:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:27.173 21:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61713 00:06:27.173 21:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61713 /var/tmp/spdk2.sock 00:06:27.173 21:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61713 ']' 00:06:27.173 21:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.173 21:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.173 21:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.173 21:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.173 21:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.173 [2024-07-14 21:05:38.599626] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:27.173 [2024-07-14 21:05:38.599811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61713 ] 00:06:27.431 [2024-07-14 21:05:38.766478] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:27.431 [2024-07-14 21:05:38.766556] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.689 [2024-07-14 21:05:39.108727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.948 [2024-07-14 21:05:39.432734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.883 21:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.883 21:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:28.883 21:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61696 00:06:28.883 21:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61696 00:06:28.883 21:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.820 21:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61696 00:06:29.820 21:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61696 ']' 00:06:29.820 21:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61696 00:06:29.820 21:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:29.820 21:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.820 21:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61696 00:06:29.820 21:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.820 killing process with pid 61696 00:06:29.820 21:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.820 21:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61696' 00:06:29.820 21:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61696 00:06:29.820 21:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61696 00:06:34.031 21:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61713 00:06:34.031 21:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61713 ']' 00:06:34.031 21:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61713 00:06:34.031 21:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:34.031 21:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.031 21:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61713 00:06:34.031 21:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.031 killing process with pid 61713 00:06:34.031 21:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.031 21:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61713' 00:06:34.031 21:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61713 00:06:34.031 21:05:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61713 00:06:35.406 00:06:35.406 real 0m9.342s 00:06:35.406 user 0m9.761s 00:06:35.406 sys 0m1.114s 00:06:35.406 ************************************ 00:06:35.406 END TEST non_locking_app_on_locked_coremask 00:06:35.406 21:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.406 21:05:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.406 ************************************ 00:06:35.406 21:05:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:35.406 21:05:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:35.406 21:05:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.406 21:05:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.406 21:05:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.406 ************************************ 00:06:35.406 START TEST locking_app_on_unlocked_coremask 00:06:35.406 ************************************ 00:06:35.406 21:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:35.406 21:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61834 00:06:35.407 21:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61834 /var/tmp/spdk.sock 00:06:35.407 21:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:35.407 21:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61834 ']' 00:06:35.407 21:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.407 21:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.407 21:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
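Editor's note: the non_locking_app_on_locked_coremask run that just finished (pids 61696 and 61713) is the two-instance case: the first spdk_tgt takes the core-0 lock as usual, and a second one can share the core only because it is started with --disable-cpumask-locks and pointed at its own RPC socket. Both command lines are lifted from the trace; the pid variable names are illustrative.

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  $spdk_tgt -m 0x1 &                                                  # holds the core-0 lock
  locked_pid=$!
  waitforlisten $locked_pid /var/tmp/spdk.sock

  $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # logs "CPU core locks deactivated."
  unlocked_pid=$!
  waitforlisten $unlocked_pid /var/tmp/spdk2.sock

  lslocks -p $locked_pid | grep -q spdk_cpu_lock                      # only the first instance holds the lock
  killprocess $locked_pid
  killprocess $unlocked_pid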
00:06:35.407 21:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.407 21:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.407 [2024-07-14 21:05:46.952587] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:35.407 [2024-07-14 21:05:46.952786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61834 ] 00:06:35.665 [2024-07-14 21:05:47.120709] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:35.665 [2024-07-14 21:05:47.120777] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.924 [2024-07-14 21:05:47.288451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.924 [2024-07-14 21:05:47.443013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.498 21:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.498 21:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:36.498 21:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61854 00:06:36.498 21:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.498 21:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61854 /var/tmp/spdk2.sock 00:06:36.498 21:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61854 ']' 00:06:36.498 21:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.498 21:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.498 21:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.498 21:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.498 21:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.498 [2024-07-14 21:05:48.005258] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:36.498 [2024-07-14 21:05:48.005425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61854 ] 00:06:36.773 [2024-07-14 21:05:48.171450] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.032 [2024-07-14 21:05:48.482110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.291 [2024-07-14 21:05:48.796139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.227 21:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.227 21:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:38.227 21:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61854 00:06:38.227 21:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61854 00:06:38.227 21:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.163 21:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61834 00:06:39.163 21:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61834 ']' 00:06:39.163 21:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 61834 00:06:39.163 21:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:39.163 21:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.163 21:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61834 00:06:39.163 21:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.163 killing process with pid 61834 00:06:39.163 21:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.163 21:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61834' 00:06:39.163 21:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 61834 00:06:39.163 21:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 61834 00:06:43.348 21:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61854 00:06:43.348 21:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61854 ']' 00:06:43.349 21:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 61854 00:06:43.349 21:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:43.349 21:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.349 21:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61854 00:06:43.349 21:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.349 killing process with pid 61854 00:06:43.349 21:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.349 21:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61854' 00:06:43.349 21:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 61854 00:06:43.349 21:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 61854 00:06:44.722 00:06:44.722 real 0m9.301s 00:06:44.722 user 0m9.687s 00:06:44.722 sys 0m1.126s 00:06:44.722 ************************************ 00:06:44.722 END TEST locking_app_on_unlocked_coremask 00:06:44.722 ************************************ 00:06:44.722 21:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.722 21:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.722 21:05:56 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:44.722 21:05:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:44.722 21:05:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.722 21:05:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.722 21:05:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.722 ************************************ 00:06:44.722 START TEST locking_app_on_locked_coremask 00:06:44.722 ************************************ 00:06:44.722 21:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:44.722 21:05:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61976 00:06:44.722 21:05:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.722 21:05:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61976 /var/tmp/spdk.sock 00:06:44.722 21:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61976 ']' 00:06:44.722 21:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.722 21:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.723 21:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.723 21:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.723 21:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.981 [2024-07-14 21:05:56.343396] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:44.981 [2024-07-14 21:05:56.343577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61976 ] 00:06:44.981 [2024-07-14 21:05:56.509677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.239 [2024-07-14 21:05:56.706315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.496 [2024-07-14 21:05:56.863134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62002 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62002 /var/tmp/spdk2.sock 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62002 /var/tmp/spdk2.sock 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62002 /var/tmp/spdk2.sock 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62002 ']' 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.061 21:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.061 [2024-07-14 21:05:57.464043] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:46.061 [2024-07-14 21:05:57.464213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62002 ] 00:06:46.319 [2024-07-14 21:05:57.631561] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61976 has claimed it. 00:06:46.319 [2024-07-14 21:05:57.631655] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.884 ERROR: process (pid: 62002) is no longer running 00:06:46.884 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62002) - No such process 00:06:46.884 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.884 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:46.884 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:46.884 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.884 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:46.884 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.884 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61976 00:06:46.884 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61976 00:06:46.884 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.142 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61976 00:06:47.142 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61976 ']' 00:06:47.142 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61976 00:06:47.142 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:47.142 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.142 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61976 00:06:47.142 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.142 killing process with pid 61976 00:06:47.142 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.142 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61976' 00:06:47.142 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61976 00:06:47.142 21:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61976 00:06:49.042 00:06:49.042 real 0m4.240s 00:06:49.042 user 0m4.582s 00:06:49.042 sys 0m0.734s 00:06:49.042 ************************************ 00:06:49.042 END TEST locking_app_on_locked_coremask 00:06:49.042 ************************************ 00:06:49.042 21:06:00 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.042 21:06:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.042 21:06:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:49.042 21:06:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:49.042 21:06:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.042 21:06:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.042 21:06:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.042 ************************************ 00:06:49.042 START TEST locking_overlapped_coremask 00:06:49.042 ************************************ 00:06:49.042 21:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:49.042 21:06:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62063 00:06:49.042 21:06:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 62063 /var/tmp/spdk.sock 00:06:49.042 21:06:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:49.042 21:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 62063 ']' 00:06:49.042 21:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.042 21:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.042 21:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.042 21:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.042 21:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.301 [2024-07-14 21:06:00.601818] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:49.301 [2024-07-14 21:06:00.601998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62063 ] 00:06:49.301 [2024-07-14 21:06:00.771079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.559 [2024-07-14 21:06:00.936749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.560 [2024-07-14 21:06:00.936891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.560 [2024-07-14 21:06:00.936897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.560 [2024-07-14 21:06:01.101355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62086 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62086 /var/tmp/spdk2.sock 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62086 /var/tmp/spdk2.sock 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62086 /var/tmp/spdk2.sock 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 62086 ']' 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.127 21:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.385 [2024-07-14 21:06:01.722451] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:50.385 [2024-07-14 21:06:01.722637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62086 ] 00:06:50.385 [2024-07-14 21:06:01.895190] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62063 has claimed it. 00:06:50.385 [2024-07-14 21:06:01.895284] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:50.953 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62086) - No such process 00:06:50.953 ERROR: process (pid: 62086) is no longer running 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 62063 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 62063 ']' 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 62063 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62063 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62063' 00:06:50.953 killing process with pid 62063 00:06:50.953 21:06:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 62063 00:06:50.953 21:06:02 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 62063 00:06:52.864 00:06:52.864 real 0m3.869s 00:06:52.864 user 0m10.238s 00:06:52.864 sys 0m0.508s 00:06:52.864 21:06:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.864 21:06:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.864 ************************************ 00:06:52.864 END TEST locking_overlapped_coremask 00:06:52.864 ************************************ 00:06:52.864 21:06:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:52.864 21:06:04 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:52.864 21:06:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.864 21:06:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.864 21:06:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.864 ************************************ 00:06:52.864 START TEST locking_overlapped_coremask_via_rpc 00:06:52.864 ************************************ 00:06:52.864 21:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:52.864 21:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62139 00:06:52.864 21:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 62139 /var/tmp/spdk.sock 00:06:52.864 21:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62139 ']' 00:06:52.864 21:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.864 21:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:52.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.864 21:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.864 21:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.864 21:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.864 21:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.121 [2024-07-14 21:06:04.522912] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:53.121 [2024-07-14 21:06:04.523589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62139 ] 00:06:53.380 [2024-07-14 21:06:04.695930] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:53.380 [2024-07-14 21:06:04.696218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.380 [2024-07-14 21:06:04.919898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.380 [2024-07-14 21:06:04.919960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.380 [2024-07-14 21:06:04.919945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.639 [2024-07-14 21:06:05.080934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.208 21:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.208 21:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:54.208 21:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62163 00:06:54.208 21:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 62163 /var/tmp/spdk2.sock 00:06:54.208 21:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62163 ']' 00:06:54.208 21:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.208 21:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.208 21:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:54.208 21:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.208 21:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.208 21:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.208 [2024-07-14 21:06:05.682186] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:54.208 [2024-07-14 21:06:05.682390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62163 ] 00:06:54.467 [2024-07-14 21:06:05.860698] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:54.467 [2024-07-14 21:06:05.860805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.725 [2024-07-14 21:06:06.190675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.725 [2024-07-14 21:06:06.193926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.725 [2024-07-14 21:06:06.193948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:55.294 [2024-07-14 21:06:06.536117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.231 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.231 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:56.231 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:56.231 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.231 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.231 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.231 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.231 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.232 [2024-07-14 21:06:07.526048] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62139 has claimed it. 
00:06:56.232 request: 00:06:56.232 { 00:06:56.232 "method": "framework_enable_cpumask_locks", 00:06:56.232 "req_id": 1 00:06:56.232 } 00:06:56.232 Got JSON-RPC error response 00:06:56.232 response: 00:06:56.232 { 00:06:56.232 "code": -32603, 00:06:56.232 "message": "Failed to claim CPU core: 2" 00:06:56.232 } 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 62139 /var/tmp/spdk.sock 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62139 ']' 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.232 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.491 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.491 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:56.491 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 62163 /var/tmp/spdk2.sock 00:06:56.491 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62163 ']' 00:06:56.491 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.491 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.491 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:56.491 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.491 21:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.751 21:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.751 ************************************ 00:06:56.751 END TEST locking_overlapped_coremask_via_rpc 00:06:56.751 ************************************ 00:06:56.751 21:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:56.751 21:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:56.751 21:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:56.751 21:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:56.751 21:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:56.751 00:06:56.751 real 0m3.660s 00:06:56.751 user 0m1.308s 00:06:56.751 sys 0m0.175s 00:06:56.751 21:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.751 21:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.751 21:06:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:56.751 21:06:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:56.751 21:06:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62139 ]] 00:06:56.751 21:06:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62139 00:06:56.751 21:06:08 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62139 ']' 00:06:56.751 21:06:08 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62139 00:06:56.751 21:06:08 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:56.751 21:06:08 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.751 21:06:08 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62139 00:06:56.751 killing process with pid 62139 00:06:56.751 21:06:08 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.751 21:06:08 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.751 21:06:08 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62139' 00:06:56.752 21:06:08 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 62139 00:06:56.752 21:06:08 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 62139 00:06:58.657 21:06:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62163 ]] 00:06:58.657 21:06:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62163 00:06:58.657 21:06:10 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62163 ']' 00:06:58.657 21:06:10 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62163 00:06:58.657 21:06:10 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:58.657 21:06:10 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.657 21:06:10 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62163 00:06:58.657 killing process with pid 62163 00:06:58.657 21:06:10 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:58.657 21:06:10 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:58.657 21:06:10 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62163' 00:06:58.657 21:06:10 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 62163 00:06:58.657 21:06:10 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 62163 00:07:00.562 21:06:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.562 21:06:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:00.562 21:06:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62139 ]] 00:07:00.562 21:06:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62139 00:07:00.562 21:06:12 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62139 ']' 00:07:00.562 21:06:12 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62139 00:07:00.562 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (62139) - No such process 00:07:00.562 Process with pid 62139 is not found 00:07:00.562 21:06:12 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 62139 is not found' 00:07:00.562 21:06:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62163 ]] 00:07:00.562 21:06:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62163 00:07:00.562 21:06:12 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62163 ']' 00:07:00.562 21:06:12 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62163 00:07:00.562 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (62163) - No such process 00:07:00.562 Process with pid 62163 is not found 00:07:00.562 21:06:12 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 62163 is not found' 00:07:00.562 21:06:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.562 00:07:00.562 real 0m41.524s 00:07:00.562 user 1m10.400s 00:07:00.562 sys 0m5.790s 00:07:00.562 21:06:12 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.562 ************************************ 00:07:00.563 END TEST cpu_locks 00:07:00.563 ************************************ 00:07:00.563 21:06:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.563 21:06:12 event -- common/autotest_common.sh@1142 -- # return 0 00:07:00.563 00:07:00.563 real 1m11.637s 00:07:00.563 user 2m8.518s 00:07:00.563 sys 0m9.120s 00:07:00.563 ************************************ 00:07:00.563 END TEST event 00:07:00.563 ************************************ 00:07:00.563 21:06:12 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.563 21:06:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.822 21:06:12 -- common/autotest_common.sh@1142 -- # return 0 00:07:00.822 21:06:12 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:00.822 21:06:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.822 21:06:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.822 21:06:12 -- common/autotest_common.sh@10 -- # set +x 00:07:00.822 ************************************ 00:07:00.822 START TEST thread 
00:07:00.822 ************************************ 00:07:00.822 21:06:12 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:00.822 * Looking for test storage... 00:07:00.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:00.822 21:06:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.822 21:06:12 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:00.822 21:06:12 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.822 21:06:12 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.822 ************************************ 00:07:00.822 START TEST thread_poller_perf 00:07:00.822 ************************************ 00:07:00.822 21:06:12 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.822 [2024-07-14 21:06:12.259874] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:00.822 [2024-07-14 21:06:12.260035] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62331 ] 00:07:01.080 [2024-07-14 21:06:12.430562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.339 [2024-07-14 21:06:12.659217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.339 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:02.715 ====================================== 00:07:02.715 busy:2213965804 (cyc) 00:07:02.715 total_run_count: 354000 00:07:02.715 tsc_hz: 2200000000 (cyc) 00:07:02.715 ====================================== 00:07:02.715 poller_cost: 6254 (cyc), 2842 (nsec) 00:07:02.715 00:07:02.715 real 0m1.797s 00:07:02.715 user 0m1.589s 00:07:02.715 sys 0m0.095s 00:07:02.715 21:06:14 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.715 21:06:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:02.715 ************************************ 00:07:02.715 END TEST thread_poller_perf 00:07:02.715 ************************************ 00:07:02.715 21:06:14 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:02.715 21:06:14 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:02.715 21:06:14 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:02.715 21:06:14 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.715 21:06:14 thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.715 ************************************ 00:07:02.715 START TEST thread_poller_perf 00:07:02.715 ************************************ 00:07:02.715 21:06:14 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:02.715 [2024-07-14 21:06:14.102003] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:02.715 [2024-07-14 21:06:14.102276] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62368 ] 00:07:02.715 [2024-07-14 21:06:14.256557] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.974 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:02.974 [2024-07-14 21:06:14.410459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.352 ====================================== 00:07:04.352 busy:2203888374 (cyc) 00:07:04.352 total_run_count: 4483000 00:07:04.352 tsc_hz: 2200000000 (cyc) 00:07:04.352 ====================================== 00:07:04.352 poller_cost: 491 (cyc), 223 (nsec) 00:07:04.352 00:07:04.352 real 0m1.673s 00:07:04.352 user 0m1.477s 00:07:04.352 sys 0m0.086s 00:07:04.352 21:06:15 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.352 21:06:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.352 ************************************ 00:07:04.352 END TEST thread_poller_perf 00:07:04.352 ************************************ 00:07:04.352 21:06:15 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:04.352 21:06:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:04.352 ************************************ 00:07:04.352 END TEST thread 00:07:04.352 ************************************ 00:07:04.352 00:07:04.352 real 0m3.663s 00:07:04.352 user 0m3.126s 00:07:04.352 sys 0m0.302s 00:07:04.352 21:06:15 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.352 21:06:15 thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.352 21:06:15 -- common/autotest_common.sh@1142 -- # return 0 00:07:04.352 21:06:15 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:04.352 21:06:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.352 21:06:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.352 21:06:15 -- common/autotest_common.sh@10 -- # set +x 00:07:04.352 ************************************ 00:07:04.352 START TEST accel 00:07:04.352 ************************************ 00:07:04.352 21:06:15 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:04.611 * Looking for test storage... 00:07:04.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:04.611 21:06:15 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:04.611 21:06:15 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:04.611 21:06:15 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:04.611 21:06:15 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=62449 00:07:04.611 21:06:15 accel -- accel/accel.sh@63 -- # waitforlisten 62449 00:07:04.611 21:06:15 accel -- common/autotest_common.sh@829 -- # '[' -z 62449 ']' 00:07:04.611 21:06:15 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.611 21:06:15 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.611 21:06:15 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:04.611 21:06:15 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.611 21:06:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.611 21:06:15 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:04.611 21:06:15 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:04.611 21:06:15 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.611 21:06:15 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.611 21:06:15 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.611 21:06:15 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.611 21:06:15 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.611 21:06:15 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:04.611 21:06:15 accel -- accel/accel.sh@41 -- # jq -r . 00:07:04.611 [2024-07-14 21:06:16.047839] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:04.611 [2024-07-14 21:06:16.048022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62449 ] 00:07:04.869 [2024-07-14 21:06:16.219365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.870 [2024-07-14 21:06:16.377141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.129 [2024-07-14 21:06:16.525090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.698 21:06:16 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.698 21:06:16 accel -- common/autotest_common.sh@862 -- # return 0 00:07:05.698 21:06:16 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:05.698 21:06:16 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:05.698 21:06:16 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:05.698 21:06:16 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:05.698 21:06:16 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:05.698 21:06:16 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:05.698 21:06:16 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.698 21:06:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.698 21:06:16 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:05.698 21:06:16 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.698 21:06:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.698 21:06:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.698 21:06:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.698 21:06:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.698 21:06:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.698 21:06:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.698 21:06:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.698 21:06:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.698 21:06:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.698 21:06:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.698 21:06:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.698 21:06:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.698 21:06:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.698 21:06:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.698 21:06:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.698 21:06:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.698 21:06:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.698 21:06:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.698 21:06:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.698 21:06:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.698 21:06:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.698 21:06:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.698 21:06:17 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.698 21:06:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.698 21:06:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.698 21:06:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.698 21:06:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.698 21:06:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.698 21:06:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.698 21:06:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.698 21:06:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.698 21:06:17 accel -- accel/accel.sh@75 -- # killprocess 62449 00:07:05.698 21:06:17 accel -- common/autotest_common.sh@948 -- # '[' -z 62449 ']' 00:07:05.698 21:06:17 accel -- common/autotest_common.sh@952 -- # kill -0 62449 00:07:05.698 21:06:17 accel -- common/autotest_common.sh@953 -- # uname 00:07:05.698 21:06:17 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.698 21:06:17 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62449 00:07:05.698 21:06:17 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.698 killing process with pid 62449 00:07:05.698 21:06:17 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.698 21:06:17 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62449' 00:07:05.698 21:06:17 accel -- common/autotest_common.sh@967 -- # kill 62449 00:07:05.698 21:06:17 accel -- common/autotest_common.sh@972 -- # wait 62449 00:07:07.601 21:06:18 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:07.601 21:06:18 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:07.601 21:06:18 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:07.601 21:06:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.601 21:06:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.601 21:06:18 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:07.601 21:06:18 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:07.601 21:06:18 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:07.601 21:06:18 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.601 21:06:18 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.601 21:06:18 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.601 21:06:18 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.601 21:06:18 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.601 21:06:18 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:07.601 21:06:18 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:07.601 21:06:18 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.601 21:06:18 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:07.601 21:06:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.601 21:06:18 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:07.601 21:06:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:07.601 21:06:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.601 21:06:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.601 ************************************ 00:07:07.601 START TEST accel_missing_filename 00:07:07.601 ************************************ 00:07:07.601 21:06:18 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:07.601 21:06:18 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:07.601 21:06:18 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:07.601 21:06:18 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:07.601 21:06:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.601 21:06:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:07.601 21:06:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.601 21:06:18 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:07.601 21:06:18 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:07.601 21:06:18 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:07.601 21:06:18 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.601 21:06:18 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.601 21:06:18 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.601 21:06:18 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.601 21:06:18 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.601 21:06:18 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:07.601 21:06:18 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:07.601 [2024-07-14 21:06:18.978592] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:07.601 [2024-07-14 21:06:18.978802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62519 ] 00:07:07.601 [2024-07-14 21:06:19.144179] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.860 [2024-07-14 21:06:19.294699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.118 [2024-07-14 21:06:19.443646] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.377 [2024-07-14 21:06:19.832655] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:08.636 A filename is required. 
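The error above is the point of this negative test: a compress workload was requested with no input file. As the option help printed later in this log notes, -l names the uncompressed input for compress/decompress workloads, so a passing form of the same command (using the same test file the compress-verify case below feeds in) would be:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib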
00:07:08.636 21:06:20 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:08.636 21:06:20 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.636 21:06:20 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:08.636 21:06:20 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:08.636 21:06:20 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:08.636 21:06:20 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.636 00:07:08.636 real 0m1.251s 00:07:08.636 user 0m1.036s 00:07:08.636 sys 0m0.147s 00:07:08.636 21:06:20 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.636 ************************************ 00:07:08.636 END TEST accel_missing_filename 00:07:08.636 ************************************ 00:07:08.636 21:06:20 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:08.895 21:06:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.895 21:06:20 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.895 21:06:20 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:08.895 21:06:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.895 21:06:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.895 ************************************ 00:07:08.895 START TEST accel_compress_verify 00:07:08.895 ************************************ 00:07:08.896 21:06:20 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.896 21:06:20 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:08.896 21:06:20 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.896 21:06:20 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:08.896 21:06:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.896 21:06:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:08.896 21:06:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.896 21:06:20 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.896 21:06:20 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.896 21:06:20 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:08.896 21:06:20 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.896 21:06:20 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.896 21:06:20 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.896 21:06:20 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.896 21:06:20 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.896 21:06:20 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:07:08.896 21:06:20 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:08.896 [2024-07-14 21:06:20.281828] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:08.896 [2024-07-14 21:06:20.282010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62550 ] 00:07:09.155 [2024-07-14 21:06:20.453371] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.155 [2024-07-14 21:06:20.614456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.413 [2024-07-14 21:06:20.773578] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.673 [2024-07-14 21:06:21.160507] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:10.241 00:07:10.241 Compression does not support the verify option, aborting. 00:07:10.241 21:06:21 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:10.241 21:06:21 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.241 21:06:21 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:10.241 21:06:21 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:10.241 21:06:21 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:10.241 ************************************ 00:07:10.241 END TEST accel_compress_verify 00:07:10.241 ************************************ 00:07:10.241 21:06:21 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.241 00:07:10.241 real 0m1.275s 00:07:10.241 user 0m1.076s 00:07:10.241 sys 0m0.140s 00:07:10.241 21:06:21 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.241 21:06:21 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:10.241 21:06:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.241 21:06:21 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:10.241 21:06:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:10.241 21:06:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.241 21:06:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.241 ************************************ 00:07:10.241 START TEST accel_wrong_workload 00:07:10.241 ************************************ 00:07:10.241 21:06:21 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:10.241 21:06:21 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:10.241 21:06:21 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:10.241 21:06:21 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:10.241 21:06:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.241 21:06:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:10.241 21:06:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.241 21:06:21 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
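Each of these negative cases runs accel_perf under the NOT wrapper traced above (local es=0, valid_exec_arg, then the es > 128 remapping once the command returns); the wrapper passes only when the wrapped command exits non-zero. A minimal stand-in for that idea, assuming we only care about inverting the exit status and not about the SPDK helper's es bookkeeping, applied to the foobar run that follows:

    NOT() {
        # succeed only if the wrapped command fails
        if "$@"; then return 1; else return 0; fi
    }
    NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar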
00:07:10.241 21:06:21 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:10.241 21:06:21 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:10.241 21:06:21 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.241 21:06:21 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.241 21:06:21 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.241 21:06:21 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.241 21:06:21 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.241 21:06:21 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:10.241 21:06:21 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:10.241 Unsupported workload type: foobar 00:07:10.241 [2024-07-14 21:06:21.609747] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:10.241 accel_perf options: 00:07:10.241 [-h help message] 00:07:10.241 [-q queue depth per core] 00:07:10.241 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:10.241 [-T number of threads per core 00:07:10.241 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:10.241 [-t time in seconds] 00:07:10.241 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:10.241 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:10.241 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:10.242 [-l for compress/decompress workloads, name of uncompressed input file 00:07:10.242 [-S for crc32c workload, use this seed value (default 0) 00:07:10.242 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:10.242 [-f for fill workload, use this BYTE value (default 255) 00:07:10.242 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:10.242 [-y verify result if this switch is on] 00:07:10.242 [-a tasks to allocate per core (default: same value as -q)] 00:07:10.242 Can be used to spread operations across a wider range of memory. 
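That usage text covers every option the remaining cases exercise (-t, -w, -q, -a, -o, -f, -S, -C, -x, -y, -l). For reference, a well-formed invocation combining several of them, with illustrative values borrowed from the tests later in this run, would look like:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -q 64 -o 4096 -y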
00:07:10.242 21:06:21 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:10.242 21:06:21 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.242 21:06:21 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.242 21:06:21 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.242 00:07:10.242 real 0m0.077s 00:07:10.242 user 0m0.088s 00:07:10.242 sys 0m0.039s 00:07:10.242 21:06:21 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.242 21:06:21 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:10.242 ************************************ 00:07:10.242 END TEST accel_wrong_workload 00:07:10.242 ************************************ 00:07:10.242 21:06:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.242 21:06:21 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:10.242 21:06:21 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:10.242 21:06:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.242 21:06:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.242 ************************************ 00:07:10.242 START TEST accel_negative_buffers 00:07:10.242 ************************************ 00:07:10.242 21:06:21 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:10.242 21:06:21 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:10.242 21:06:21 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:10.242 21:06:21 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:10.242 21:06:21 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.242 21:06:21 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:10.242 21:06:21 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.242 21:06:21 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:10.242 21:06:21 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:10.242 21:06:21 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:10.242 21:06:21 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.242 21:06:21 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.242 21:06:21 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.242 21:06:21 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.242 21:06:21 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.242 21:06:21 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:10.242 21:06:21 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:10.242 -x option must be non-negative. 
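accel_perf rejects the -x -1 above before the app starts, and the usage text it prints next points out that an xor workload needs at least two source buffers. A valid form of the same workload, mirroring the other flags this test passes, would be:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2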
00:07:10.242 [2024-07-14 21:06:21.735523] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:10.242 accel_perf options: 00:07:10.242 [-h help message] 00:07:10.242 [-q queue depth per core] 00:07:10.242 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:10.242 [-T number of threads per core 00:07:10.242 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:10.242 [-t time in seconds] 00:07:10.242 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:10.242 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:10.242 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:10.242 [-l for compress/decompress workloads, name of uncompressed input file 00:07:10.242 [-S for crc32c workload, use this seed value (default 0) 00:07:10.242 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:10.242 [-f for fill workload, use this BYTE value (default 255) 00:07:10.242 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:10.242 [-y verify result if this switch is on] 00:07:10.242 [-a tasks to allocate per core (default: same value as -q)] 00:07:10.242 Can be used to spread operations across a wider range of memory. 00:07:10.242 ************************************ 00:07:10.242 END TEST accel_negative_buffers 00:07:10.242 ************************************ 00:07:10.242 21:06:21 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:10.242 21:06:21 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.242 21:06:21 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.242 21:06:21 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.242 00:07:10.242 real 0m0.079s 00:07:10.242 user 0m0.091s 00:07:10.242 sys 0m0.036s 00:07:10.242 21:06:21 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.242 21:06:21 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:10.540 21:06:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.540 21:06:21 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:10.540 21:06:21 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:10.540 21:06:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.540 21:06:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.540 ************************************ 00:07:10.540 START TEST accel_crc32c 00:07:10.540 ************************************ 00:07:10.540 21:06:21 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:10.540 21:06:21 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:10.540 21:06:21 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:10.540 21:06:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.540 21:06:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.540 21:06:21 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:10.540 21:06:21 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:10.540 21:06:21 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:10.540 21:06:21 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.540 21:06:21 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.540 21:06:21 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.540 21:06:21 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.540 21:06:21 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.540 21:06:21 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:10.540 21:06:21 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:10.540 [2024-07-14 21:06:21.865533] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:10.540 [2024-07-14 21:06:21.865699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62628 ] 00:07:10.540 [2024-07-14 21:06:22.035363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.799 [2024-07-14 21:06:22.185273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.799 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:11.057 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.057 21:06:22 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:11.057 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.057 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.057 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:11.057 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.057 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.057 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.058 21:06:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:12.959 21:06:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.959 00:07:12.959 real 0m2.256s 00:07:12.959 user 0m2.021s 00:07:12.959 sys 0m0.139s 00:07:12.959 ************************************ 00:07:12.959 END TEST accel_crc32c 00:07:12.959 ************************************ 00:07:12.959 21:06:24 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.959 21:06:24 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:12.959 21:06:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.959 21:06:24 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:12.959 21:06:24 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:12.959 21:06:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.959 21:06:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.959 ************************************ 00:07:12.959 START TEST accel_crc32c_C2 00:07:12.959 ************************************ 00:07:12.959 21:06:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:12.959 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.959 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:12.959 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.959 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.959 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:12.959 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:12.959 21:06:24 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.959 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.959 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.959 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.959 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.959 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.959 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:12.959 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:12.959 [2024-07-14 21:06:24.164737] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:12.959 [2024-07-14 21:06:24.164945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62669 ] 00:07:12.959 [2024-07-14 21:06:24.312127] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.959 [2024-07-14 21:06:24.468174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.218 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.218 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.218 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.218 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.218 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.218 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.218 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.218 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.218 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:13.218 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.218 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.218 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.218 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.218 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.219 21:06:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.120 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.120 21:06:26 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.120 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.120 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.120 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.120 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.120 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.121 00:07:15.121 real 0m2.197s 00:07:15.121 user 0m1.996s 00:07:15.121 sys 0m0.108s 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.121 21:06:26 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:15.121 ************************************ 00:07:15.121 END TEST accel_crc32c_C2 00:07:15.121 ************************************ 00:07:15.121 21:06:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.121 21:06:26 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:15.121 21:06:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:15.121 21:06:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.121 21:06:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.121 ************************************ 00:07:15.121 START TEST accel_copy 00:07:15.121 ************************************ 00:07:15.121 21:06:26 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:15.121 21:06:26 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:15.121 21:06:26 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:15.121 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.121 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.121 21:06:26 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:15.121 21:06:26 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:15.121 21:06:26 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:15.121 21:06:26 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.121 21:06:26 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.121 21:06:26 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.121 21:06:26 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.121 21:06:26 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.121 21:06:26 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:15.121 21:06:26 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:15.121 [2024-07-14 21:06:26.424609] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:15.121 [2024-07-14 21:06:26.424791] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62711 ] 00:07:15.121 [2024-07-14 21:06:26.594321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.381 [2024-07-14 21:06:26.744966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 
21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.381 21:06:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:17.287 21:06:28 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.287 00:07:17.287 real 0m2.234s 00:07:17.287 user 0m2.006s 00:07:17.287 sys 0m0.134s 00:07:17.287 ************************************ 00:07:17.287 END TEST accel_copy 00:07:17.287 ************************************ 00:07:17.287 21:06:28 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.287 21:06:28 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:17.287 21:06:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.287 21:06:28 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:17.287 21:06:28 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:17.287 21:06:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.287 21:06:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.287 ************************************ 00:07:17.287 START TEST accel_fill 00:07:17.287 ************************************ 00:07:17.287 21:06:28 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:17.287 21:06:28 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:17.287 21:06:28 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:17.287 21:06:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.287 21:06:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.287 21:06:28 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:17.287 21:06:28 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:17.287 21:06:28 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:17.287 21:06:28 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.287 21:06:28 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.287 21:06:28 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.287 21:06:28 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.287 21:06:28 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.287 21:06:28 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:17.287 21:06:28 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:17.287 [2024-07-14 21:06:28.705051] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:17.288 [2024-07-14 21:06:28.705214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62752 ] 00:07:17.547 [2024-07-14 21:06:28.875717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.547 [2024-07-14 21:06:29.038237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:17.806 21:06:29 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:17.806 21:06:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:19.740 21:06:30 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.740 00:07:19.740 real 0m2.261s 00:07:19.740 user 0m2.022s 00:07:19.740 sys 0m0.143s 00:07:19.740 21:06:30 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.740 ************************************ 00:07:19.740 END TEST accel_fill 00:07:19.740 ************************************ 00:07:19.740 21:06:30 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:19.740 21:06:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.740 21:06:30 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:19.740 21:06:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:19.740 21:06:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.740 21:06:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.740 ************************************ 00:07:19.740 START TEST accel_copy_crc32c 00:07:19.740 ************************************ 00:07:19.740 21:06:30 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:19.740 21:06:30 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:19.740 21:06:30 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:19.740 21:06:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.740 21:06:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.740 21:06:30 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:19.740 21:06:30 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:19.740 21:06:30 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:19.740 21:06:30 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.740 21:06:30 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.740 21:06:30 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.740 21:06:30 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.740 21:06:30 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.740 21:06:30 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:07:19.740 21:06:30 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:19.740 [2024-07-14 21:06:31.037182] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:19.740 [2024-07-14 21:06:31.038684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62795 ] 00:07:19.740 [2024-07-14 21:06:31.210913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.020 [2024-07-14 21:06:31.369684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.020 21:06:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
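The var/val loop above is accel.sh reading back accel_perf's configuration banner for copy_crc32c: seed 0, 4096-byte transfers, queue depth 32, the software module, a 1-second run with verification. The harness launched it with the command shown at the top of this test; a manual rerun can drop the fd-62 JSON config, e.g.:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y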
00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.927 00:07:21.927 real 0m2.308s 00:07:21.927 user 0m2.051s 00:07:21.927 sys 0m0.155s 00:07:21.927 ************************************ 00:07:21.927 END TEST accel_copy_crc32c 00:07:21.927 ************************************ 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.927 21:06:33 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:21.927 21:06:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.927 21:06:33 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:21.927 21:06:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:21.927 21:06:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.927 21:06:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.927 ************************************ 00:07:21.927 START TEST accel_copy_crc32c_C2 00:07:21.927 ************************************ 00:07:21.927 21:06:33 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:21.927 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.927 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:21.927 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.927 21:06:33 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:07:21.927 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:21.927 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:21.927 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.927 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.927 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.927 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.927 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.927 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.927 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:21.927 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:21.927 [2024-07-14 21:06:33.379506] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:21.927 [2024-07-14 21:06:33.379680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62845 ] 00:07:22.187 [2024-07-14 21:06:33.550734] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.187 [2024-07-14 21:06:33.715548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.446 21:06:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.352 00:07:24.352 real 0m2.311s 00:07:24.352 user 0m2.075s 00:07:24.352 sys 0m0.142s 00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
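accel_copy_crc32c_C2 is the same copy_crc32c workload started with -C 2; the readback above shows a 4096-byte vector size against an 8192-byte transfer, so the source appears to be split into two 4096-byte iovecs per operation. Equivalent manual invocation (harness JSON config omitted):
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2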
00:07:24.352 21:06:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:24.352 ************************************ 00:07:24.352 END TEST accel_copy_crc32c_C2 00:07:24.352 ************************************ 00:07:24.352 21:06:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.352 21:06:35 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:24.352 21:06:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:24.352 21:06:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.352 21:06:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.352 ************************************ 00:07:24.352 START TEST accel_dualcast 00:07:24.352 ************************************ 00:07:24.352 21:06:35 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:24.352 21:06:35 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:24.352 21:06:35 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:24.352 21:06:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.352 21:06:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.352 21:06:35 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:24.352 21:06:35 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:24.352 21:06:35 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:24.352 21:06:35 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.352 21:06:35 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.352 21:06:35 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.352 21:06:35 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.352 21:06:35 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.352 21:06:35 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:24.352 21:06:35 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:24.352 [2024-07-14 21:06:35.744134] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
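accel_dualcast exercises the dualcast opcode (one source buffer written to two destinations) through the same accel_perf wrapper; per the command line above, minus the fd-62 config the harness feeds it, that is roughly:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y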
00:07:24.352 [2024-07-14 21:06:35.744316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62886 ] 00:07:24.611 [2024-07-14 21:06:35.912059] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.611 [2024-07-14 21:06:36.081009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.871 21:06:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:26.776 21:06:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.776 00:07:26.776 real 0m2.307s 00:07:26.776 user 0m2.075s 00:07:26.776 sys 0m0.132s 00:07:26.776 21:06:37 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.776 21:06:37 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:26.776 ************************************ 00:07:26.776 END TEST accel_dualcast 00:07:26.776 ************************************ 00:07:26.776 21:06:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.776 21:06:38 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:26.776 21:06:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:26.776 21:06:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.776 21:06:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.776 ************************************ 00:07:26.776 START TEST accel_compare 00:07:26.776 ************************************ 00:07:26.776 21:06:38 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:26.776 21:06:38 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:26.776 21:06:38 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:26.776 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 21:06:38 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:26.776 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 21:06:38 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:26.776 21:06:38 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:26.776 21:06:38 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.776 21:06:38 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.776 21:06:38 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.776 21:06:38 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.776 21:06:38 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.776 21:06:38 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:26.776 21:06:38 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:26.776 [2024-07-14 21:06:38.102207] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
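accel_compare starts here; compare checks two buffers for equality rather than moving data, and the readback below shows the usual 4096-byte, queue-depth-32, 1-second software-module configuration. Standalone sketch (config fd omitted):
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y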
00:07:26.776 [2024-07-14 21:06:38.102405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62927 ] 00:07:26.776 [2024-07-14 21:06:38.275060] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.035 [2024-07-14 21:06:38.463624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.295 21:06:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:29.198 21:06:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.198 ************************************ 00:07:29.198 END TEST accel_compare 00:07:29.198 ************************************ 00:07:29.198 00:07:29.198 real 0m2.401s 00:07:29.198 user 0m2.160s 00:07:29.198 sys 0m0.142s 00:07:29.198 21:06:40 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.198 21:06:40 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:29.198 21:06:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.198 21:06:40 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:29.198 21:06:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:29.198 21:06:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.198 21:06:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.198 ************************************ 00:07:29.198 START TEST accel_xor 00:07:29.198 ************************************ 00:07:29.198 21:06:40 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:29.198 21:06:40 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:29.198 21:06:40 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:29.198 21:06:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.198 21:06:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.198 21:06:40 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:29.198 21:06:40 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:29.198 21:06:40 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:29.198 21:06:40 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.198 21:06:40 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.198 21:06:40 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.198 21:06:40 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.198 21:06:40 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.198 21:06:40 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:29.198 21:06:40 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:29.198 [2024-07-14 21:06:40.557344] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
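The first accel_xor pass XORs two source buffers (val=2 in the readback below) into a destination, verified for 1 second on the software path; as launched above, minus the harness config:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y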
00:07:29.198 [2024-07-14 21:06:40.557494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62974 ] 00:07:29.198 [2024-07-14 21:06:40.728085] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.457 [2024-07-14 21:06:40.909836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.716 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.717 21:06:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.619 21:06:42 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.619 00:07:31.619 real 0m2.342s 00:07:31.619 user 0m2.098s 00:07:31.619 sys 0m0.148s 00:07:31.619 21:06:42 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.619 21:06:42 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:31.619 ************************************ 00:07:31.619 END TEST accel_xor 00:07:31.619 ************************************ 00:07:31.619 21:06:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.619 21:06:42 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:31.619 21:06:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:31.619 21:06:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.619 21:06:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.619 ************************************ 00:07:31.619 START TEST accel_xor 00:07:31.619 ************************************ 00:07:31.619 21:06:42 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:31.619 21:06:42 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:31.619 [2024-07-14 21:06:42.936978] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
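The second accel_xor pass repeats the test with -x 3, which appears to raise the XOR source count from 2 to 3 (val=3 in the readback below) while keeping the rest of the configuration unchanged:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3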
00:07:31.619 [2024-07-14 21:06:42.937120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63020 ] 00:07:31.619 [2024-07-14 21:06:43.089744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.880 [2024-07-14 21:06:43.255973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:31.880 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:32.139 21:06:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.058 21:06:45 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:34.058 21:06:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.058 00:07:34.058 real 0m2.221s 00:07:34.058 user 0m2.010s 00:07:34.058 sys 0m0.117s 00:07:34.058 21:06:45 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.058 21:06:45 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:34.058 ************************************ 00:07:34.058 END TEST accel_xor 00:07:34.058 ************************************ 00:07:34.058 21:06:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.058 21:06:45 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:34.058 21:06:45 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:34.058 21:06:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.058 21:06:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.058 ************************************ 00:07:34.058 START TEST accel_dif_verify 00:07:34.058 ************************************ 00:07:34.058 21:06:45 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:34.058 21:06:45 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:34.058 21:06:45 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:34.058 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.058 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.058 21:06:45 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:34.058 21:06:45 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:34.058 21:06:45 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:34.058 21:06:45 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.058 21:06:45 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.058 21:06:45 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.058 21:06:45 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.058 21:06:45 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.058 21:06:45 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:34.058 21:06:45 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:34.058 [2024-07-14 21:06:45.223728] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:34.058 [2024-07-14 21:06:45.223902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63061 ] 00:07:34.058 [2024-07-14 21:06:45.391120] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.058 [2024-07-14 21:06:45.548187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.339 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.340 21:06:45 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:34.340 21:06:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.243 21:06:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.243 21:06:47 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.243 21:06:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.243 21:06:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.243 21:06:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.243 21:06:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.243 21:06:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:36.244 21:06:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.244 00:07:36.244 real 0m2.229s 00:07:36.244 user 0m1.997s 00:07:36.244 sys 0m0.141s 00:07:36.244 ************************************ 00:07:36.244 END TEST accel_dif_verify 00:07:36.244 ************************************ 00:07:36.244 21:06:47 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.244 21:06:47 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:36.244 21:06:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.244 21:06:47 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:36.244 21:06:47 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:36.244 21:06:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.244 21:06:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.244 ************************************ 00:07:36.244 START TEST accel_dif_generate 00:07:36.244 ************************************ 00:07:36.244 21:06:47 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:36.244 21:06:47 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:36.244 21:06:47 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:36.244 21:06:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.244 21:06:47 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.244 21:06:47 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:36.244 21:06:47 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:36.244 21:06:47 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:36.244 21:06:47 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.244 21:06:47 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.244 21:06:47 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.244 21:06:47 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.244 21:06:47 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.244 21:06:47 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:36.244 21:06:47 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:36.244 [2024-07-14 21:06:47.500585] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:36.244 [2024-07-14 21:06:47.500836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63108 ] 00:07:36.244 [2024-07-14 21:06:47.669829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.503 [2024-07-14 21:06:47.861129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.503 21:06:48 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.503 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.764 21:06:48 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:36.764 21:06:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.669 21:06:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.669 21:06:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.669 21:06:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.669 21:06:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.669 21:06:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.669 21:06:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.669 21:06:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.669 21:06:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.669 21:06:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.669 21:06:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.669 21:06:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.669 21:06:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.670 21:06:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.670 21:06:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.670 21:06:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.670 21:06:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.670 21:06:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.670 21:06:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.670 21:06:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.670 21:06:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.670 21:06:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.670 21:06:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.670 21:06:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.670 21:06:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.670 21:06:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.670 21:06:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:38.670 21:06:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.670 00:07:38.670 real 0m2.315s 
00:07:38.670 user 0m2.061s 00:07:38.670 sys 0m0.157s 00:07:38.670 21:06:49 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.670 21:06:49 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:38.670 ************************************ 00:07:38.670 END TEST accel_dif_generate 00:07:38.670 ************************************ 00:07:38.670 21:06:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.670 21:06:49 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:38.670 21:06:49 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:38.670 21:06:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.670 21:06:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.670 ************************************ 00:07:38.670 START TEST accel_dif_generate_copy 00:07:38.670 ************************************ 00:07:38.670 21:06:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:38.670 21:06:49 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:38.670 21:06:49 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:38.670 21:06:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.670 21:06:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.670 21:06:49 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:38.670 21:06:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:38.670 21:06:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:38.670 21:06:49 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.670 21:06:49 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.670 21:06:49 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.670 21:06:49 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.670 21:06:49 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.670 21:06:49 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:38.670 21:06:49 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:38.670 [2024-07-14 21:06:49.872502] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:38.670 [2024-07-14 21:06:49.872862] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63149 ] 00:07:38.670 [2024-07-14 21:06:50.042325] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.670 [2024-07-14 21:06:50.199295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:38.930 21:06:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.835 00:07:40.835 real 0m2.242s 00:07:40.835 user 0m0.017s 00:07:40.835 sys 0m0.004s 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.835 ************************************ 00:07:40.835 END TEST accel_dif_generate_copy 00:07:40.835 ************************************ 00:07:40.835 21:06:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:40.835 21:06:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.835 21:06:52 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:40.835 21:06:52 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.835 21:06:52 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:40.835 21:06:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.835 21:06:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.835 ************************************ 00:07:40.835 START TEST accel_comp 00:07:40.835 ************************************ 00:07:40.835 21:06:52 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.835 21:06:52 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:40.835 21:06:52 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:40.835 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.835 21:06:52 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.835 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.835 21:06:52 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.835 21:06:52 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:40.835 21:06:52 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.835 21:06:52 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.835 21:06:52 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.835 21:06:52 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.835 21:06:52 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.835 21:06:52 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:40.835 21:06:52 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:40.835 [2024-07-14 21:06:52.165674] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:40.835 [2024-07-14 21:06:52.165888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63190 ] 00:07:40.835 [2024-07-14 21:06:52.335402] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.094 [2024-07-14 21:06:52.490428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:41.353 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.354 21:06:52 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.354 21:06:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:43.256 21:06:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.256 00:07:43.256 real 0m2.271s 00:07:43.256 user 0m2.037s 00:07:43.256 sys 0m0.139s 00:07:43.256 21:06:54 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.256 ************************************ 00:07:43.256 END TEST accel_comp 00:07:43.256 ************************************ 00:07:43.256 21:06:54 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:43.256 21:06:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.256 21:06:54 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:43.256 21:06:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:43.256 21:06:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.256 21:06:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.256 ************************************ 00:07:43.256 START TEST accel_decomp 00:07:43.256 ************************************ 00:07:43.256 21:06:54 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:43.256 21:06:54 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:43.256 21:06:54 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:43.256 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.256 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.256 21:06:54 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:43.256 21:06:54 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:43.256 21:06:54 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:43.256 21:06:54 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.256 21:06:54 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.256 21:06:54 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.256 21:06:54 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.256 21:06:54 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.256 21:06:54 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:43.256 21:06:54 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:43.256 [2024-07-14 21:06:54.490973] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:43.256 [2024-07-14 21:06:54.491136] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63236 ] 00:07:43.257 [2024-07-14 21:06:54.660017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.516 [2024-07-14 21:06:54.814902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
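For reference, the accel_decomp case being traced here boils down to a single accel_perf invocation. A rough, purely illustrative sketch of repeating that software decompress run by hand (assuming the repo layout shown in this log, and leaving out the -c /dev/fd/62 JSON-config plumbing that build_accel_config adds) would be:

  # Manual re-run of the traced command line; SPDK_REPO is just a convenience variable for this sketch.
  SPDK_REPO=/home/vagrant/spdk_repo/spdk
  # -t 1: run for one second, -w decompress: workload under test,
  # -l: input data file for the (de)compress workloads, -y: verify the results.
  "$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_REPO/test/accel/bib" -y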
00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.516 21:06:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:45.418 21:06:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.418 00:07:45.418 real 0m2.259s 00:07:45.418 user 0m2.026s 00:07:45.418 sys 0m0.139s 00:07:45.418 21:06:56 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.418 ************************************ 00:07:45.418 END TEST accel_decomp 00:07:45.418 ************************************ 00:07:45.418 21:06:56 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:45.418 21:06:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.418 21:06:56 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:45.418 21:06:56 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:45.418 21:06:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.418 21:06:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.418 ************************************ 00:07:45.418 START TEST accel_decomp_full 00:07:45.418 ************************************ 00:07:45.418 21:06:56 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:45.418 21:06:56 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:45.418 21:06:56 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:45.418 21:06:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 21:06:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 21:06:56 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:45.418 21:06:56 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:45.418 21:06:56 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:45.418 21:06:56 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.418 21:06:56 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.418 21:06:56 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.418 21:06:56 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.418 21:06:56 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.419 21:06:56 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:45.419 21:06:56 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:45.419 [2024-07-14 21:06:56.800334] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
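Compared with the plain accel_decomp run above, the only change on the accel_perf command line for this _full variant is the trailing -o 0; judging by the '111250 bytes' value the config loop reads back below (versus '4096 bytes' earlier), that switches the run from 4096-byte chunks to the whole input in one shot. An equivalent manual invocation, under the same assumptions as the earlier sketch:

  # Same software decompress run, with -o 0 exactly as passed by the _full test variants.
  SPDK_REPO=/home/vagrant/spdk_repo/spdk
  "$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_REPO/test/accel/bib" -y -o 0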
00:07:45.419 [2024-07-14 21:06:56.800494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63283 ] 00:07:45.676 [2024-07-14 21:06:56.967281] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.676 [2024-07-14 21:06:57.116782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.933 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.934 21:06:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.833 21:06:58 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.833 21:06:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.833 21:06:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.833 21:06:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.833 21:06:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:47.833 21:06:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:47.833 21:06:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:47.833 21:06:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:47.833 21:06:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.833 21:06:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:47.833 21:06:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.833 00:07:47.833 real 0m2.261s 00:07:47.833 user 0m2.024s 00:07:47.833 sys 0m0.145s 00:07:47.833 21:06:59 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.833 ************************************ 00:07:47.833 END TEST accel_decomp_full 00:07:47.833 ************************************ 00:07:47.833 21:06:59 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:47.833 21:06:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.833 21:06:59 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:47.833 21:06:59 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:47.833 21:06:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.833 21:06:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.833 ************************************ 00:07:47.833 START TEST accel_decomp_mcore 00:07:47.833 ************************************ 00:07:47.833 21:06:59 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:47.833 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:47.833 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:47.833 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.833 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.833 21:06:59 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:47.833 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:47.833 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:47.833 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.833 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.833 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.833 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.833 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.833 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:47.833 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:47.833 [2024-07-14 21:06:59.111822] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:47.833 [2024-07-14 21:06:59.111984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63324 ] 00:07:47.833 [2024-07-14 21:06:59.277774] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.091 [2024-07-14 21:06:59.430738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.091 [2024-07-14 21:06:59.430875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.091 [2024-07-14 21:06:59.431005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.091 [2024-07-14 21:06:59.431020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.091 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.091 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.091 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.091 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.091 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.091 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.091 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.091 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.091 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.091 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.092 21:06:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.997 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.998 21:07:01 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.998 00:07:49.998 real 0m2.311s 00:07:49.998 user 0m0.018s 00:07:49.998 sys 0m0.006s 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.998 ************************************ 00:07:49.998 END TEST accel_decomp_mcore 00:07:49.998 ************************************ 00:07:49.998 21:07:01 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:49.998 21:07:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.998 21:07:01 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.998 21:07:01 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:49.998 21:07:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.998 21:07:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.998 ************************************ 00:07:49.998 START TEST accel_decomp_full_mcore 00:07:49.998 ************************************ 00:07:49.998 21:07:01 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.998 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:49.998 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:49.998 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.998 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.998 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.998 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.998 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:49.998 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.998 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.998 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.998 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.998 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.998 21:07:01 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:49.998 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:49.998 [2024-07-14 21:07:01.473418] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:49.998 [2024-07-14 21:07:01.473578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63368 ] 00:07:50.258 [2024-07-14 21:07:01.643751] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.258 [2024-07-14 21:07:01.803828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.258 [2024-07-14 21:07:01.803986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.258 [2024-07-14 21:07:01.804050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.258 [2024-07-14 21:07:01.804264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:50.518 21:07:01 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.518 21:07:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.420 21:07:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.420 ************************************ 00:07:52.420 END TEST accel_decomp_full_mcore 00:07:52.420 ************************************ 00:07:52.420 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:52.421 21:07:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.421 00:07:52.421 real 0m2.359s 00:07:52.421 user 0m0.019s 00:07:52.421 sys 0m0.003s 00:07:52.421 21:07:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.421 21:07:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:52.421 21:07:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:52.421 21:07:03 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:52.421 21:07:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:52.421 21:07:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.421 21:07:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.421 ************************************ 00:07:52.421 START TEST accel_decomp_mthread 00:07:52.421 ************************************ 00:07:52.421 21:07:03 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:52.421 21:07:03 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:52.421 21:07:03 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:52.421 21:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.421 21:07:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.421 21:07:03 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:52.421 21:07:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:52.421 21:07:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:52.421 21:07:03 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.421 21:07:03 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.421 21:07:03 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.421 21:07:03 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.421 21:07:03 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.421 21:07:03 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:52.421 21:07:03 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:52.421 [2024-07-14 21:07:03.878344] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
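A quick note on the -m 0xf flag that the two mcore variants above appended: it is an ordinary hexadecimal core mask, which is why their startup reports 'Total cores available: 4' and reactors on cores 0-3. A small stand-alone helper for expanding such a mask (illustrative only, not part of accel.sh):

  # Expand a hex core mask into the core numbers it selects.
  mask_to_cores() {
      local mask=$(( $1 ))
      local core=0 cores=()
      while (( mask > 0 )); do
          if (( mask & 1 )); then cores+=("$core"); fi
          mask=$(( mask >> 1 ))
          core=$(( core + 1 ))
      done
      echo "${cores[*]}"
  }
  mask_to_cores 0xf   # prints: 0 1 2 3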
00:07:52.421 [2024-07-14 21:07:03.878524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63418 ] 00:07:52.679 [2024-07-14 21:07:04.038697] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.679 [2024-07-14 21:07:04.184872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
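The mthread variants instead add -T 2 to the same base command, presumably what shows up as the 2 the config loop reads back below, where the single-threaded runs above read 1. Repeating it manually, with the flags copied verbatim from the run_test line and the same path assumptions as before:

  # Multi-threaded decompress variant (-T 2), as exercised by accel_decomp_mthread.
  SPDK_REPO=/home/vagrant/spdk_repo/spdk
  "$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_REPO/test/accel/bib" -y -T 2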
00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.938 21:07:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.866 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.867 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.867 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:54.867 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:54.867 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.867 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.867 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.867 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:54.867 21:07:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.867 00:07:54.867 real 0m2.250s 00:07:54.867 user 0m2.019s 00:07:54.867 sys 0m0.137s 00:07:54.867 21:07:06 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.867 21:07:06 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:54.867 ************************************ 00:07:54.867 END TEST accel_decomp_mthread 00:07:54.867 ************************************ 00:07:54.867 21:07:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:54.867 21:07:06 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:54.867 21:07:06 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:54.867 21:07:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.867 21:07:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.867 ************************************ 00:07:54.867 START 
TEST accel_decomp_full_mthread 00:07:54.867 ************************************ 00:07:54.867 21:07:06 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:54.867 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:54.867 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:54.867 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:54.867 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:54.867 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:54.867 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:54.867 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:54.867 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.867 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.867 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.867 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.867 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.867 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:54.867 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:54.867 [2024-07-14 21:07:06.187561] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:54.867 [2024-07-14 21:07:06.187718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63464 ] 00:07:54.867 [2024-07-14 21:07:06.353339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.125 [2024-07-14 21:07:06.505652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.125 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.125 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.126 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.385 21:07:06 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.385 21:07:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.287 00:07:57.287 real 0m2.279s 00:07:57.287 user 0m2.047s 00:07:57.287 sys 0m0.138s 00:07:57.287 ************************************ 00:07:57.287 END TEST accel_decomp_full_mthread 00:07:57.287 ************************************ 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.287 21:07:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
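The run that completes above (accel_decomp_full_mthread) drives the accel_perf example in software mode: a one-second decompress workload (-t 1 -w decompress) over the test/accel/bib payload, with -o 0, -y and -T 2 exactly as traced. A minimal sketch of reproducing that invocation by hand, assuming the SPDK tree is already built at the path this job uses; the accel JSON config the harness feeds over /dev/fd/62 is empty in this run, so it is simply left out, and the flag semantics should be read as belonging to this SPDK revision rather than to any other release:

    # Sketch: re-run the accel_perf command traced above against a built SPDK tree.
    # All flags are copied verbatim from the trace; -c /dev/fd/62 (an empty accel
    # JSON config in this run) is the only argument dropped.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk    # build location used by this CI job
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2

The -T 2 argument appears to be what makes this the multi-threaded (*_mthread) variant of the decomp test, and the [[ -n software ]] / [[ software == software ]] checks traced after the run are the harness confirming that the decompress operation was serviced by the software accel module.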
00:07:57.287 21:07:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:57.287 21:07:08 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:57.287 21:07:08 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:57.287 21:07:08 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:57.287 21:07:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.287 21:07:08 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:57.287 21:07:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.287 21:07:08 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.287 21:07:08 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.287 21:07:08 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.287 21:07:08 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.287 21:07:08 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.287 21:07:08 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:57.287 21:07:08 accel -- accel/accel.sh@41 -- # jq -r . 00:07:57.287 ************************************ 00:07:57.287 START TEST accel_dif_functional_tests 00:07:57.287 ************************************ 00:07:57.287 21:07:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:57.287 [2024-07-14 21:07:08.570608] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:57.287 [2024-07-14 21:07:08.570805] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63506 ] 00:07:57.287 [2024-07-14 21:07:08.739090] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.546 [2024-07-14 21:07:08.902506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.546 [2024-07-14 21:07:08.902616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.546 [2024-07-14 21:07:08.902641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.546 [2024-07-14 21:07:09.061654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:57.805 00:07:57.805 00:07:57.805 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.805 http://cunit.sourceforge.net/ 00:07:57.805 00:07:57.805 00:07:57.805 Suite: accel_dif 00:07:57.805 Test: verify: DIF generated, GUARD check ...passed 00:07:57.805 Test: verify: DIF generated, APPTAG check ...passed 00:07:57.805 Test: verify: DIF generated, REFTAG check ...passed 00:07:57.805 Test: verify: DIF not generated, GUARD check ...passed 00:07:57.805 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 21:07:09.144388] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:57.805 passed 00:07:57.805 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 21:07:09.144506] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:57.805 passed 00:07:57.805 Test: verify: APPTAG correct, APPTAG check ...[2024-07-14 21:07:09.144813] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:57.805 passed 00:07:57.805 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:57.805 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:07:57.805 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:57.805 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:57.805 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:57.805 Test: verify copy: DIF generated, GUARD check ...passed 00:07:57.805 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:57.805 Test: verify copy: DIF generated, REFTAG check ...[2024-07-14 21:07:09.144936] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:57.805 [2024-07-14 21:07:09.145138] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:57.805 passed 00:07:57.805 Test: verify copy: DIF not generated, GUARD check ...passed 00:07:57.805 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-14 21:07:09.145474] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:57.805 passed 00:07:57.805 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-14 21:07:09.145532] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:57.805 [2024-07-14 21:07:09.145601] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:57.805 passed 00:07:57.805 Test: generate copy: DIF generated, GUARD check ...passed 00:07:57.805 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:57.805 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:57.805 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:57.805 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:57.805 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:57.805 Test: generate copy: iovecs-len validate ...[2024-07-14 21:07:09.146339] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:57.805 passed 00:07:57.805 Test: generate copy: buffer alignment validate ...passed 00:07:57.805 00:07:57.805 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.805 suites 1 1 n/a 0 0 00:07:57.805 tests 26 26 26 0 0 00:07:57.805 asserts 115 115 115 0 n/a 00:07:57.805 00:07:57.805 Elapsed time = 0.005 seconds 00:07:58.740 ************************************ 00:07:58.740 END TEST accel_dif_functional_tests 00:07:58.740 ************************************ 00:07:58.740 00:07:58.740 real 0m1.654s 00:07:58.740 user 0m3.045s 00:07:58.740 sys 0m0.202s 00:07:58.740 21:07:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.740 21:07:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:58.740 21:07:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:58.740 ************************************ 00:07:58.740 END TEST accel 00:07:58.740 ************************************ 00:07:58.740 00:07:58.740 real 0m54.323s 00:07:58.740 user 0m59.321s 00:07:58.740 sys 0m4.614s 00:07:58.740 21:07:10 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.740 21:07:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.740 21:07:10 -- common/autotest_common.sh@1142 -- # return 0 00:07:58.740 21:07:10 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:58.740 21:07:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:58.740 21:07:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.740 21:07:10 -- common/autotest_common.sh@10 -- # set +x 00:07:58.740 ************************************ 00:07:58.741 START TEST accel_rpc 00:07:58.741 ************************************ 00:07:58.741 21:07:10 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:58.999 * Looking for test storage... 00:07:58.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:58.999 21:07:10 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:58.999 21:07:10 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=63587 00:07:58.999 21:07:10 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 63587 00:07:58.999 21:07:10 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:58.999 21:07:10 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 63587 ']' 00:07:58.999 21:07:10 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.999 21:07:10 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.999 21:07:10 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.999 21:07:10 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.999 21:07:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.999 [2024-07-14 21:07:10.394901] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:58.999 [2024-07-14 21:07:10.395038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63587 ] 00:07:59.257 [2024-07-14 21:07:10.553755] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.257 [2024-07-14 21:07:10.706649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.824 21:07:11 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.824 21:07:11 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:59.824 21:07:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:59.824 21:07:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:59.824 21:07:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:59.824 21:07:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:59.824 21:07:11 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:59.824 21:07:11 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:59.824 21:07:11 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.824 21:07:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.824 ************************************ 00:07:59.824 START TEST accel_assign_opcode 00:07:59.824 ************************************ 00:07:59.824 21:07:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:59.824 21:07:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:59.824 21:07:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.824 21:07:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:59.824 [2024-07-14 21:07:11.327607] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:59.824 21:07:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.824 21:07:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:59.824 21:07:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.824 21:07:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:59.824 [2024-07-14 21:07:11.335600] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:59.824 21:07:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.824 21:07:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:59.824 21:07:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.824 21:07:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:00.083 [2024-07-14 21:07:11.493977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:00.651 21:07:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.651 21:07:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:00.651 21:07:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:00.651 21:07:11 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.651 21:07:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:00.651 21:07:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:00.651 21:07:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.651 software 00:08:00.651 ************************************ 00:08:00.651 END TEST accel_assign_opcode 00:08:00.651 ************************************ 00:08:00.651 00:08:00.651 real 0m0.641s 00:08:00.651 user 0m0.059s 00:08:00.651 sys 0m0.010s 00:08:00.651 21:07:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.651 21:07:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:00.651 21:07:12 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:00.651 21:07:12 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 63587 00:08:00.651 21:07:12 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 63587 ']' 00:08:00.651 21:07:12 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 63587 00:08:00.651 21:07:12 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:00.651 21:07:12 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:00.651 21:07:12 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63587 00:08:00.651 killing process with pid 63587 00:08:00.651 21:07:12 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:00.651 21:07:12 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:00.651 21:07:12 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63587' 00:08:00.651 21:07:12 accel_rpc -- common/autotest_common.sh@967 -- # kill 63587 00:08:00.651 21:07:12 accel_rpc -- common/autotest_common.sh@972 -- # wait 63587 00:08:02.556 00:08:02.556 real 0m3.559s 00:08:02.556 user 0m3.650s 00:08:02.556 sys 0m0.428s 00:08:02.556 21:07:13 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.556 21:07:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.556 ************************************ 00:08:02.556 END TEST accel_rpc 00:08:02.556 ************************************ 00:08:02.556 21:07:13 -- common/autotest_common.sh@1142 -- # return 0 00:08:02.556 21:07:13 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:02.556 21:07:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.556 21:07:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.556 21:07:13 -- common/autotest_common.sh@10 -- # set +x 00:08:02.556 ************************************ 00:08:02.556 START TEST app_cmdline 00:08:02.556 ************************************ 00:08:02.556 21:07:13 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:02.556 * Looking for test storage... 
00:08:02.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:02.556 21:07:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:02.556 21:07:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63699 00:08:02.556 21:07:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63699 00:08:02.556 21:07:13 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:02.556 21:07:13 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 63699 ']' 00:08:02.556 21:07:13 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.556 21:07:13 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:02.556 21:07:13 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.556 21:07:13 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:02.556 21:07:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:02.556 [2024-07-14 21:07:14.034257] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:02.556 [2024-07-14 21:07:14.034711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63699 ] 00:08:02.816 [2024-07-14 21:07:14.203355] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.816 [2024-07-14 21:07:14.348853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.075 [2024-07-14 21:07:14.493741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:03.644 21:07:14 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:03.644 21:07:14 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:03.644 21:07:14 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:03.644 { 00:08:03.644 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:08:03.644 "fields": { 00:08:03.644 "major": 24, 00:08:03.644 "minor": 9, 00:08:03.644 "patch": 0, 00:08:03.644 "suffix": "-pre", 00:08:03.644 "commit": "719d03c6a" 00:08:03.644 } 00:08:03.644 } 00:08:03.644 21:07:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:03.644 21:07:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:03.644 21:07:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:03.644 21:07:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:03.644 21:07:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:03.644 21:07:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:03.644 21:07:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:03.644 21:07:15 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.644 21:07:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:03.644 21:07:15 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.903 21:07:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:03.903 21:07:15 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:03.903 21:07:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:03.903 21:07:15 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:03.903 21:07:15 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:03.903 21:07:15 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.903 21:07:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.903 21:07:15 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.903 21:07:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.903 21:07:15 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.903 21:07:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.903 21:07:15 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.903 21:07:15 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:03.903 21:07:15 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:04.163 request: 00:08:04.163 { 00:08:04.163 "method": "env_dpdk_get_mem_stats", 00:08:04.163 "req_id": 1 00:08:04.163 } 00:08:04.163 Got JSON-RPC error response 00:08:04.163 response: 00:08:04.163 { 00:08:04.163 "code": -32601, 00:08:04.163 "message": "Method not found" 00:08:04.163 } 00:08:04.163 21:07:15 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:04.163 21:07:15 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.163 21:07:15 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:04.163 21:07:15 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.163 21:07:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63699 00:08:04.163 21:07:15 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 63699 ']' 00:08:04.163 21:07:15 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 63699 00:08:04.163 21:07:15 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:04.163 21:07:15 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:04.163 21:07:15 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63699 00:08:04.163 killing process with pid 63699 00:08:04.163 21:07:15 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:04.163 21:07:15 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:04.163 21:07:15 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63699' 00:08:04.163 21:07:15 app_cmdline -- common/autotest_common.sh@967 -- # kill 63699 00:08:04.163 21:07:15 app_cmdline -- common/autotest_common.sh@972 -- # wait 63699 00:08:06.068 ************************************ 00:08:06.068 END TEST app_cmdline 00:08:06.068 ************************************ 00:08:06.068 00:08:06.068 real 0m3.425s 00:08:06.068 user 0m3.904s 00:08:06.068 sys 0m0.474s 00:08:06.068 21:07:17 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.068 21:07:17 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:08:06.068 21:07:17 -- common/autotest_common.sh@1142 -- # return 0 00:08:06.068 21:07:17 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:06.068 21:07:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.068 21:07:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.068 21:07:17 -- common/autotest_common.sh@10 -- # set +x 00:08:06.068 ************************************ 00:08:06.068 START TEST version 00:08:06.068 ************************************ 00:08:06.068 21:07:17 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:06.068 * Looking for test storage... 00:08:06.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:06.068 21:07:17 version -- app/version.sh@17 -- # get_header_version major 00:08:06.068 21:07:17 version -- app/version.sh@14 -- # cut -f2 00:08:06.068 21:07:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:06.068 21:07:17 version -- app/version.sh@14 -- # tr -d '"' 00:08:06.068 21:07:17 version -- app/version.sh@17 -- # major=24 00:08:06.068 21:07:17 version -- app/version.sh@18 -- # get_header_version minor 00:08:06.068 21:07:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:06.068 21:07:17 version -- app/version.sh@14 -- # cut -f2 00:08:06.068 21:07:17 version -- app/version.sh@14 -- # tr -d '"' 00:08:06.068 21:07:17 version -- app/version.sh@18 -- # minor=9 00:08:06.068 21:07:17 version -- app/version.sh@19 -- # get_header_version patch 00:08:06.068 21:07:17 version -- app/version.sh@14 -- # cut -f2 00:08:06.068 21:07:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:06.068 21:07:17 version -- app/version.sh@14 -- # tr -d '"' 00:08:06.068 21:07:17 version -- app/version.sh@19 -- # patch=0 00:08:06.068 21:07:17 version -- app/version.sh@20 -- # get_header_version suffix 00:08:06.068 21:07:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:06.068 21:07:17 version -- app/version.sh@14 -- # tr -d '"' 00:08:06.068 21:07:17 version -- app/version.sh@14 -- # cut -f2 00:08:06.068 21:07:17 version -- app/version.sh@20 -- # suffix=-pre 00:08:06.068 21:07:17 version -- app/version.sh@22 -- # version=24.9 00:08:06.068 21:07:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:06.068 21:07:17 version -- app/version.sh@28 -- # version=24.9rc0 00:08:06.068 21:07:17 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:06.068 21:07:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:06.068 21:07:17 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:06.068 21:07:17 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:06.068 00:08:06.068 real 0m0.160s 00:08:06.068 user 0m0.092s 00:08:06.068 sys 0m0.100s 00:08:06.068 21:07:17 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.068 21:07:17 version -- common/autotest_common.sh@10 -- # set 
+x 00:08:06.068 ************************************ 00:08:06.068 END TEST version 00:08:06.068 ************************************ 00:08:06.068 21:07:17 -- common/autotest_common.sh@1142 -- # return 0 00:08:06.068 21:07:17 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:06.068 21:07:17 -- spdk/autotest.sh@198 -- # uname -s 00:08:06.068 21:07:17 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:06.068 21:07:17 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:06.068 21:07:17 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:08:06.068 21:07:17 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:08:06.068 21:07:17 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:06.068 21:07:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.068 21:07:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.068 21:07:17 -- common/autotest_common.sh@10 -- # set +x 00:08:06.068 ************************************ 00:08:06.068 START TEST spdk_dd 00:08:06.068 ************************************ 00:08:06.068 21:07:17 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:06.068 * Looking for test storage... 00:08:06.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:06.068 21:07:17 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:06.068 21:07:17 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.068 21:07:17 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.068 21:07:17 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.068 21:07:17 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.069 21:07:17 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.069 21:07:17 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.069 21:07:17 spdk_dd -- paths/export.sh@5 -- # export PATH 00:08:06.069 21:07:17 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.069 21:07:17 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:06.637 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:06.637 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:06.637 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:06.637 21:07:17 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:06.637 21:07:17 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:06.637 21:07:17 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:08:06.637 21:07:17 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:08:06.637 21:07:17 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:08:06.637 21:07:17 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:06.637 21:07:17 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:08:06.637 21:07:17 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:08:06.637 21:07:17 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:08:06.637 21:07:17 spdk_dd -- scripts/common.sh@230 -- # local class 00:08:06.637 21:07:17 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:08:06.637 21:07:17 spdk_dd -- scripts/common.sh@232 -- # local progif 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@233 -- # class=01 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@15 -- # local i 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@24 -- # return 0 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@15 -- # local i 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:08:06.637 21:07:18 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@24 -- # return 0 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:08:06.637 21:07:18 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:06.637 21:07:18 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:08:06.637 21:07:18 spdk_dd -- dd/common.sh@139 -- # local lib so 00:08:06.637 21:07:18 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:06.637 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.637 21:07:18 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:08:06.637 21:07:18 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.637 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:08:06.637 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.637 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:08:06.637 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.637 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:08:06.637 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.637 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:08:06.638 
21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd 
-- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 
spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:08:06.638 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:06.639 * spdk_dd linked to liburing 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:06.639 21:07:18 spdk_dd -- 
common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:06.639 21:07:18 spdk_dd -- 
common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:06.639 21:07:18 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@152 -- # [[ ! 
-e /usr/lib64/liburing.so.2 ]] 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:08:06.639 21:07:18 spdk_dd -- dd/common.sh@157 -- # return 0 00:08:06.639 21:07:18 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:06.639 21:07:18 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:06.639 21:07:18 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:06.639 21:07:18 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.639 21:07:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:06.639 ************************************ 00:08:06.639 START TEST spdk_dd_basic_rw 00:08:06.639 ************************************ 00:08:06.639 21:07:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:06.639 * Looking for test storage... 00:08:06.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:06.639 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:06.639 21:07:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.639 21:07:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.639 21:07:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.639 21:07:18 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.639 21:07:18 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.639 21:07:18 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.639 21:07:18 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:08:06.639 21:07:18 spdk_dd.spdk_dd_basic_rw -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.639 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:06.639 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:08:06.639 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:06.640 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:08:06.640 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:06.640 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:06.640 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:06.640 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:06.640 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.898 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:08:06.898 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:08:06.898 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:08:06.898 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:08:07.160 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not 
Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features 
(0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:07.160 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not 
Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read 
Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:08:07.161 21:07:18 
spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:07.161 ************************************ 00:08:07.161 START TEST dd_bs_lt_native_bs 00:08:07.161 ************************************ 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.161 21:07:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:07.161 { 00:08:07.161 "subsystems": [ 00:08:07.161 { 00:08:07.161 "subsystem": "bdev", 00:08:07.161 "config": [ 00:08:07.161 { 00:08:07.161 "params": { 00:08:07.161 "trtype": "pcie", 00:08:07.161 "traddr": "0000:00:10.0", 00:08:07.161 "name": "Nvme0" 00:08:07.161 }, 00:08:07.161 "method": "bdev_nvme_attach_controller" 00:08:07.161 }, 00:08:07.161 { 00:08:07.161 "method": "bdev_wait_for_examine" 00:08:07.161 } 00:08:07.161 ] 00:08:07.161 } 00:08:07.161 ] 00:08:07.161 } 00:08:07.161 [2024-07-14 21:07:18.577865] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
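The identify dump above is matched against two bash regular expressions: the first extracts which LBA format the namespace currently uses (#04), the second pulls that format's data size (4096 bytes), which basic_rw.sh then adopts as the native block size. A minimal sketch of that extraction follows; the real logic lives in test/dd/common.sh (get_native_nvme_bs) and keeps the dump in an array, so details may differ. The spdk_nvme_identify path is the one visible in the trace.

    # Sketch: derive the native block size of the namespace behind a PCI address.
    get_native_nvme_bs_sketch() {
        local pci=$1 id lbaf re
        # Full identify dump for the controller, as captured above.
        id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
        # Which LBA format is in use? (trace: "Current LBA Format: LBA Format #04")
        re='Current LBA Format: *LBA Format #([0-9]+)'
        [[ $id =~ $re ]] || return 1
        lbaf=${BASH_REMATCH[1]}
        # Data size of that format (trace: "LBA Format #04: Data Size: 4096")
        re="LBA Format #$lbaf: Data Size: *([0-9]+)"
        [[ $id =~ $re ]] || return 1
        echo "${BASH_REMATCH[1]}"
    }
    # On this controller: get_native_nvme_bs_sketch 0000:00:10.0  ->  4096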
00:08:07.161 [2024-07-14 21:07:18.578044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64032 ] 00:08:07.420 [2024-07-14 21:07:18.751294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.679 [2024-07-14 21:07:18.977031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.679 [2024-07-14 21:07:19.127541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.938 [2024-07-14 21:07:19.281637] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:07.938 [2024-07-14 21:07:19.281741] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.196 [2024-07-14 21:07:19.697928] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:08.763 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:08:08.763 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:08.763 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:08:08.763 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:08:08.763 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:08:08.763 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:08.763 00:08:08.763 real 0m1.587s 00:08:08.763 user 0m1.318s 00:08:08.763 sys 0m0.216s 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:08:08.764 ************************************ 00:08:08.764 END TEST dd_bs_lt_native_bs 00:08:08.764 ************************************ 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.764 ************************************ 00:08:08.764 START TEST dd_rw 00:08:08.764 ************************************ 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:08.764 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:09.331 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:08:09.331 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:09.331 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:09.331 21:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:09.331 { 00:08:09.331 "subsystems": [ 00:08:09.331 { 00:08:09.331 "subsystem": "bdev", 00:08:09.331 "config": [ 00:08:09.331 { 00:08:09.331 "params": { 00:08:09.331 "trtype": "pcie", 00:08:09.331 "traddr": "0000:00:10.0", 00:08:09.331 "name": "Nvme0" 00:08:09.331 }, 00:08:09.331 "method": "bdev_nvme_attach_controller" 00:08:09.331 }, 00:08:09.331 { 00:08:09.331 "method": "bdev_wait_for_examine" 00:08:09.331 } 00:08:09.331 ] 00:08:09.331 } 00:08:09.331 ] 00:08:09.331 } 00:08:09.331 [2024-07-14 21:07:20.799866] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
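The dd_rw test that has just started sweeps a small matrix: block sizes derived from the native 4096-byte block (native_bs shifted left by 0, 1 and 2, giving 4096, 8192 and 16384) crossed with queue depths 1 and 64. For each cell it writes the pre-filled dd.dump0 to the Nvme0n1 bdev, reads the same number of blocks back into dd.dump1, and compares the two files. A condensed sketch of that cycle is below; $conf is a placeholder for the bdev JSON config shown in the trace (its generation is sketched a little further down), and count is hard-coded only for illustration, since the real script picks it per pass (15 here, 7 for the 8192-byte passes).

    # Condensed sketch of the basic_rw sweep, using the paths visible in the trace.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    native_bs=4096
    qds=(1 64)
    bss=()
    for s in {0..2}; do bss+=($((native_bs << s))); done    # 4096 8192 16384
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            count=15    # placeholder: 15 in the 4096-byte passes, 7 at 8192
            # dd.dump0 was pre-filled with count*bs bytes (the gen_bytes call in the trace),
            # so the write copies the whole file and needs no --count.
            "$SPDK_DD" --if="$test_file0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$conf"
            "$SPDK_DD" --ib=Nvme0n1 --of="$test_file1" --bs="$bs" --qd="$qd" --count="$count" --json "$conf"
            diff -q "$test_file0" "$test_file1"             # the verification step after each cell
            # clear_nvme (sketched below) then zero-fills the written region before the next cell.
        done
    done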
00:08:09.331 [2024-07-14 21:07:20.800028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64075 ] 00:08:09.594 [2024-07-14 21:07:20.970377] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.594 [2024-07-14 21:07:21.130981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.851 [2024-07-14 21:07:21.280601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:11.074  Copying: 60/60 [kB] (average 19 MBps) 00:08:11.074 00:08:11.074 21:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:11.074 21:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:11.074 21:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:11.074 21:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.074 { 00:08:11.074 "subsystems": [ 00:08:11.074 { 00:08:11.074 "subsystem": "bdev", 00:08:11.074 "config": [ 00:08:11.074 { 00:08:11.074 "params": { 00:08:11.074 "trtype": "pcie", 00:08:11.074 "traddr": "0000:00:10.0", 00:08:11.074 "name": "Nvme0" 00:08:11.074 }, 00:08:11.074 "method": "bdev_nvme_attach_controller" 00:08:11.074 }, 00:08:11.074 { 00:08:11.074 "method": "bdev_wait_for_examine" 00:08:11.074 } 00:08:11.074 ] 00:08:11.074 } 00:08:11.074 ] 00:08:11.074 } 00:08:11.074 [2024-07-14 21:07:22.507773] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
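Every spdk_dd invocation in this trace receives the same bdev configuration, not as a file on disk but as --json /dev/fd/NN: gen_conf prints the JSON and the shell wires it up through a file descriptor. The sketch below hard-codes the document shown repeatedly above; the real gen_conf in test/dd/common.sh assembles it from the test's settings, so treat this as an illustration only.

    # Sketch: emit the bdev config seen in the trace and hand it to spdk_dd via a fd.
    gen_conf_sketch() {
        printf '%s\n' '{
          "subsystems": [ {
            "subsystem": "bdev",
            "config": [
              { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
                "method": "bdev_nvme_attach_controller" },
              { "method": "bdev_wait_for_examine" }
            ]
          } ]
        }'
    }
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # Process substitution gives spdk_dd a /dev/fd/NN path, matching --json /dev/fd/62 above:
    "$SPDK_DD" --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
        --bs=4096 --qd=1 --count=15 --json <(gen_conf_sketch)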
00:08:11.074 [2024-07-14 21:07:22.507939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64106 ] 00:08:11.333 [2024-07-14 21:07:22.678250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.333 [2024-07-14 21:07:22.823704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.592 [2024-07-14 21:07:22.967397] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.532  Copying: 60/60 [kB] (average 14 MBps) 00:08:12.532 00:08:12.532 21:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:12.532 21:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:12.532 21:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:12.532 21:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:12.532 21:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:12.532 21:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:12.532 21:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:12.532 21:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:12.532 21:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:12.532 21:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:12.532 21:07:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:12.532 { 00:08:12.532 "subsystems": [ 00:08:12.532 { 00:08:12.532 "subsystem": "bdev", 00:08:12.532 "config": [ 00:08:12.532 { 00:08:12.532 "params": { 00:08:12.532 "trtype": "pcie", 00:08:12.532 "traddr": "0000:00:10.0", 00:08:12.532 "name": "Nvme0" 00:08:12.532 }, 00:08:12.532 "method": "bdev_nvme_attach_controller" 00:08:12.532 }, 00:08:12.532 { 00:08:12.532 "method": "bdev_wait_for_examine" 00:08:12.532 } 00:08:12.532 ] 00:08:12.532 } 00:08:12.532 ] 00:08:12.532 } 00:08:12.532 [2024-07-14 21:07:24.003165] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
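After the diff, the trace shows clear_nvme Nvme0n1 '' 61440 streaming /dev/zero over the bdev so the next cell starts from zeroed blocks: the 61440-byte region is covered by a single 1 MiB copy (bs=1048576, count=1). A minimal sketch of that step is below; the real helper in test/dd/common.sh also accepts an optional namespace reference (empty here), and its exact rounding of count may differ. gen_conf_sketch is the config helper sketched just above.

    # Sketch: wipe the first <size> bytes of the bdev with zeroes between passes.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    clear_nvme_sketch() {
        local bdev=$1 size=$2
        local bs=1048576
        local count=$(( (size + bs - 1) / bs ))    # 61440 bytes -> one 1 MiB block here
        "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json <(gen_conf_sketch)
    }
    # clear_nvme_sketch Nvme0n1 61440    # matches the /dev/zero copy in the trace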
00:08:12.532 [2024-07-14 21:07:24.003324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64128 ] 00:08:12.791 [2024-07-14 21:07:24.171812] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.791 [2024-07-14 21:07:24.331080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.050 [2024-07-14 21:07:24.476369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.245  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:14.245 00:08:14.245 21:07:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:14.245 21:07:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:14.245 21:07:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:14.245 21:07:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:14.245 21:07:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:14.245 21:07:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:14.245 21:07:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:14.811 21:07:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:14.811 21:07:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:14.811 21:07:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:14.811 21:07:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:14.811 { 00:08:14.811 "subsystems": [ 00:08:14.811 { 00:08:14.811 "subsystem": "bdev", 00:08:14.811 "config": [ 00:08:14.811 { 00:08:14.811 "params": { 00:08:14.811 "trtype": "pcie", 00:08:14.811 "traddr": "0000:00:10.0", 00:08:14.811 "name": "Nvme0" 00:08:14.811 }, 00:08:14.811 "method": "bdev_nvme_attach_controller" 00:08:14.811 }, 00:08:14.811 { 00:08:14.811 "method": "bdev_wait_for_examine" 00:08:14.811 } 00:08:14.811 ] 00:08:14.811 } 00:08:14.811 ] 00:08:14.811 } 00:08:14.811 [2024-07-14 21:07:26.241080] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
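Before each write pass the source file is refilled with exactly count*bs bytes of test data (gen_bytes 61440 above for the 4096-byte cells, 57344 later for the 8192-byte cells). The helper's body is not expanded in this trace, so the following is only a hypothetical stand-in that emits the requested number of bytes; the real gen_bytes in test/dd/common.sh may generate its data differently.

    # Hypothetical stand-in for gen_bytes: emit exactly <n> bytes of test data.
    gen_bytes_sketch() {
        head -c "$1" /dev/urandom
    }
    gen_bytes_sketch 61440 > /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0    # 15 * 4096 bytes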
00:08:14.811 [2024-07-14 21:07:26.241238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64164 ] 00:08:15.069 [2024-07-14 21:07:26.414857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.328 [2024-07-14 21:07:26.644706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.328 [2024-07-14 21:07:26.796575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:16.523  Copying: 60/60 [kB] (average 58 MBps) 00:08:16.523 00:08:16.523 21:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:16.523 21:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:16.523 21:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:16.523 21:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:16.523 { 00:08:16.523 "subsystems": [ 00:08:16.523 { 00:08:16.523 "subsystem": "bdev", 00:08:16.523 "config": [ 00:08:16.523 { 00:08:16.523 "params": { 00:08:16.523 "trtype": "pcie", 00:08:16.523 "traddr": "0000:00:10.0", 00:08:16.523 "name": "Nvme0" 00:08:16.523 }, 00:08:16.523 "method": "bdev_nvme_attach_controller" 00:08:16.523 }, 00:08:16.523 { 00:08:16.523 "method": "bdev_wait_for_examine" 00:08:16.523 } 00:08:16.523 ] 00:08:16.523 } 00:08:16.523 ] 00:08:16.523 } 00:08:16.523 [2024-07-14 21:07:27.831952] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:16.523 [2024-07-14 21:07:27.832148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64190 ] 00:08:16.523 [2024-07-14 21:07:28.002032] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.782 [2024-07-14 21:07:28.151969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.782 [2024-07-14 21:07:28.307860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:17.977  Copying: 60/60 [kB] (average 58 MBps) 00:08:17.977 00:08:17.977 21:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.977 21:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:17.977 21:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:17.977 21:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:17.977 21:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:17.977 21:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:17.977 21:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:17.977 21:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:17.977 21:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:17.977 21:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:17.977 21:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:17.977 { 00:08:17.977 "subsystems": [ 00:08:17.977 { 00:08:17.977 "subsystem": "bdev", 00:08:17.977 "config": [ 00:08:17.977 { 00:08:17.977 "params": { 00:08:17.977 "trtype": "pcie", 00:08:17.977 "traddr": "0000:00:10.0", 00:08:17.977 "name": "Nvme0" 00:08:17.977 }, 00:08:17.977 "method": "bdev_nvme_attach_controller" 00:08:17.977 }, 00:08:17.977 { 00:08:17.977 "method": "bdev_wait_for_examine" 00:08:17.977 } 00:08:17.977 ] 00:08:17.977 } 00:08:17.977 ] 00:08:17.977 } 00:08:18.236 [2024-07-14 21:07:29.541977] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:18.236 [2024-07-14 21:07:29.542132] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64218 ] 00:08:18.236 [2024-07-14 21:07:29.701798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.494 [2024-07-14 21:07:29.850946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.494 [2024-07-14 21:07:29.998817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:19.684  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:19.684 00:08:19.684 21:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:19.684 21:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:19.684 21:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:19.684 21:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:19.684 21:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:19.684 21:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:19.684 21:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:19.684 21:07:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:20.249 21:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:20.249 21:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:20.249 21:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:20.249 21:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:20.249 { 00:08:20.249 "subsystems": [ 00:08:20.249 { 00:08:20.249 "subsystem": "bdev", 00:08:20.249 "config": [ 00:08:20.249 { 00:08:20.249 "params": { 00:08:20.249 "trtype": "pcie", 00:08:20.249 "traddr": "0000:00:10.0", 00:08:20.249 "name": "Nvme0" 00:08:20.249 }, 00:08:20.249 "method": "bdev_nvme_attach_controller" 00:08:20.249 }, 00:08:20.249 { 00:08:20.249 "method": "bdev_wait_for_examine" 00:08:20.249 } 00:08:20.249 ] 00:08:20.249 } 00:08:20.249 ] 00:08:20.249 } 00:08:20.249 [2024-07-14 21:07:31.593717] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
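At this point the sweep moves from 4096-byte to 8192-byte blocks (native_bs shifted left by one), and count drops from 15 to 7 so the per-pass transfer stays just under 64 KiB. How count itself is chosen is not visible in this trace; the snippet below only reproduces the arithmetic that ties together the count=, size= and gen_bytes lines.

    # size = count * bs for the two block sizes seen so far:
    echo $(( 15 * 4096 ))    # 61440 - the 4096-byte passes above
    echo $((  7 * 8192 ))    # 57344 - the 8192-byte passes starting here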
00:08:20.249 [2024-07-14 21:07:31.593915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64249 ] 00:08:20.249 [2024-07-14 21:07:31.765738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.506 [2024-07-14 21:07:31.929523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.764 [2024-07-14 21:07:32.091350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:21.697  Copying: 56/56 [kB] (average 54 MBps) 00:08:21.697 00:08:21.955 21:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:21.955 21:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:21.956 21:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:21.956 21:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:21.956 { 00:08:21.956 "subsystems": [ 00:08:21.956 { 00:08:21.956 "subsystem": "bdev", 00:08:21.956 "config": [ 00:08:21.956 { 00:08:21.956 "params": { 00:08:21.956 "trtype": "pcie", 00:08:21.956 "traddr": "0000:00:10.0", 00:08:21.956 "name": "Nvme0" 00:08:21.956 }, 00:08:21.956 "method": "bdev_nvme_attach_controller" 00:08:21.956 }, 00:08:21.956 { 00:08:21.956 "method": "bdev_wait_for_examine" 00:08:21.956 } 00:08:21.956 ] 00:08:21.956 } 00:08:21.956 ] 00:08:21.956 } 00:08:21.956 [2024-07-14 21:07:33.363320] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:21.956 [2024-07-14 21:07:33.363485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64274 ] 00:08:22.214 [2024-07-14 21:07:33.526942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.214 [2024-07-14 21:07:33.686131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.472 [2024-07-14 21:07:33.854113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:23.408  Copying: 56/56 [kB] (average 27 MBps) 00:08:23.408 00:08:23.408 21:07:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:23.408 21:07:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:23.408 21:07:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:23.408 21:07:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:23.408 21:07:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:23.408 21:07:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:23.408 21:07:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:23.408 21:07:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:23.408 21:07:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:23.408 21:07:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:23.408 21:07:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:23.408 { 00:08:23.408 "subsystems": [ 00:08:23.408 { 00:08:23.408 "subsystem": "bdev", 00:08:23.408 "config": [ 00:08:23.408 { 00:08:23.408 "params": { 00:08:23.408 "trtype": "pcie", 00:08:23.409 "traddr": "0000:00:10.0", 00:08:23.409 "name": "Nvme0" 00:08:23.409 }, 00:08:23.409 "method": "bdev_nvme_attach_controller" 00:08:23.409 }, 00:08:23.409 { 00:08:23.409 "method": "bdev_wait_for_examine" 00:08:23.409 } 00:08:23.409 ] 00:08:23.409 } 00:08:23.409 ] 00:08:23.409 } 00:08:23.667 [2024-07-14 21:07:34.961733] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:23.667 [2024-07-14 21:07:34.961937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64307 ] 00:08:23.667 [2024-07-14 21:07:35.131465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.926 [2024-07-14 21:07:35.290262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.926 [2024-07-14 21:07:35.447658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:25.121  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:25.121 00:08:25.121 21:07:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:25.121 21:07:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:25.121 21:07:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:25.121 21:07:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:25.121 21:07:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:25.121 21:07:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:25.121 21:07:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:25.689 21:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:25.689 21:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:25.689 21:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:25.689 21:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:25.689 { 00:08:25.689 "subsystems": [ 00:08:25.689 { 00:08:25.689 "subsystem": "bdev", 00:08:25.689 "config": [ 00:08:25.689 { 00:08:25.689 "params": { 00:08:25.689 "trtype": "pcie", 00:08:25.689 "traddr": "0000:00:10.0", 00:08:25.689 "name": "Nvme0" 00:08:25.689 }, 00:08:25.689 "method": "bdev_nvme_attach_controller" 00:08:25.689 }, 00:08:25.689 { 00:08:25.689 "method": "bdev_wait_for_examine" 00:08:25.689 } 00:08:25.689 ] 00:08:25.689 } 00:08:25.689 ] 00:08:25.689 } 00:08:25.689 [2024-07-14 21:07:37.218483] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:25.689 [2024-07-14 21:07:37.218680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64338 ] 00:08:25.947 [2024-07-14 21:07:37.388493] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.206 [2024-07-14 21:07:37.553098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.206 [2024-07-14 21:07:37.700241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:27.408  Copying: 56/56 [kB] (average 54 MBps) 00:08:27.408 00:08:27.408 21:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:27.408 21:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:27.408 21:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:27.408 21:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:27.408 { 00:08:27.408 "subsystems": [ 00:08:27.408 { 00:08:27.408 "subsystem": "bdev", 00:08:27.408 "config": [ 00:08:27.408 { 00:08:27.408 "params": { 00:08:27.408 "trtype": "pcie", 00:08:27.408 "traddr": "0000:00:10.0", 00:08:27.408 "name": "Nvme0" 00:08:27.408 }, 00:08:27.408 "method": "bdev_nvme_attach_controller" 00:08:27.408 }, 00:08:27.408 { 00:08:27.408 "method": "bdev_wait_for_examine" 00:08:27.408 } 00:08:27.408 ] 00:08:27.408 } 00:08:27.408 ] 00:08:27.408 } 00:08:27.408 [2024-07-14 21:07:38.773584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:27.408 [2024-07-14 21:07:38.773772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64364 ] 00:08:27.408 [2024-07-14 21:07:38.941794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.682 [2024-07-14 21:07:39.103369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.947 [2024-07-14 21:07:39.259075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:28.884  Copying: 56/56 [kB] (average 54 MBps) 00:08:28.884 00:08:28.884 21:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:28.884 21:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:28.884 21:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:28.884 21:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:28.884 21:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:28.884 21:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:28.884 21:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:28.884 21:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:28.884 21:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:28.884 21:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:28.884 21:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:29.143 { 00:08:29.143 "subsystems": [ 00:08:29.143 { 00:08:29.143 "subsystem": "bdev", 00:08:29.143 "config": [ 00:08:29.143 { 00:08:29.143 "params": { 00:08:29.143 "trtype": "pcie", 00:08:29.143 "traddr": "0000:00:10.0", 00:08:29.143 "name": "Nvme0" 00:08:29.143 }, 00:08:29.143 "method": "bdev_nvme_attach_controller" 00:08:29.143 }, 00:08:29.143 { 00:08:29.143 "method": "bdev_wait_for_examine" 00:08:29.143 } 00:08:29.143 ] 00:08:29.143 } 00:08:29.143 ] 00:08:29.143 } 00:08:29.143 [2024-07-14 21:07:40.499486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:29.143 [2024-07-14 21:07:40.499654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64391 ] 00:08:29.143 [2024-07-14 21:07:40.670678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.402 [2024-07-14 21:07:40.836695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.662 [2024-07-14 21:07:40.995975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:30.599  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:30.599 00:08:30.599 21:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:30.599 21:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:30.599 21:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:30.599 21:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:30.599 21:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:30.599 21:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:30.599 21:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:30.599 21:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:31.168 21:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:31.168 21:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:31.168 21:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:31.168 21:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:31.168 { 00:08:31.168 "subsystems": [ 00:08:31.168 { 00:08:31.168 "subsystem": "bdev", 00:08:31.168 "config": [ 00:08:31.168 { 00:08:31.168 "params": { 00:08:31.168 "trtype": "pcie", 00:08:31.168 "traddr": "0000:00:10.0", 00:08:31.168 "name": "Nvme0" 00:08:31.169 }, 00:08:31.169 "method": "bdev_nvme_attach_controller" 00:08:31.169 }, 00:08:31.169 { 00:08:31.169 "method": "bdev_wait_for_examine" 00:08:31.169 } 00:08:31.169 ] 00:08:31.169 } 00:08:31.169 ] 00:08:31.169 } 00:08:31.169 [2024-07-14 21:07:42.540508] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:31.169 [2024-07-14 21:07:42.540641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64422 ] 00:08:31.169 [2024-07-14 21:07:42.694873] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.428 [2024-07-14 21:07:42.854011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.687 [2024-07-14 21:07:43.010595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:32.622  Copying: 48/48 [kB] (average 46 MBps) 00:08:32.622 00:08:32.622 21:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:32.622 21:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:32.622 21:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:32.622 21:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:32.880 { 00:08:32.880 "subsystems": [ 00:08:32.880 { 00:08:32.880 "subsystem": "bdev", 00:08:32.880 "config": [ 00:08:32.880 { 00:08:32.880 "params": { 00:08:32.880 "trtype": "pcie", 00:08:32.880 "traddr": "0000:00:10.0", 00:08:32.880 "name": "Nvme0" 00:08:32.880 }, 00:08:32.880 "method": "bdev_nvme_attach_controller" 00:08:32.880 }, 00:08:32.880 { 00:08:32.880 "method": "bdev_wait_for_examine" 00:08:32.880 } 00:08:32.880 ] 00:08:32.880 } 00:08:32.880 ] 00:08:32.880 } 00:08:32.880 [2024-07-14 21:07:44.246012] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:32.880 [2024-07-14 21:07:44.246174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64448 ] 00:08:32.880 [2024-07-14 21:07:44.420569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.138 [2024-07-14 21:07:44.610818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.397 [2024-07-14 21:07:44.762936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:34.333  Copying: 48/48 [kB] (average 46 MBps) 00:08:34.333 00:08:34.333 21:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:34.333 21:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:34.333 21:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:34.333 21:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:34.333 21:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:34.333 21:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:34.333 21:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:34.333 21:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:34.333 21:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:34.333 21:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:34.333 21:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:34.333 { 00:08:34.333 "subsystems": [ 00:08:34.333 { 00:08:34.333 "subsystem": "bdev", 00:08:34.333 "config": [ 00:08:34.333 { 00:08:34.333 "params": { 00:08:34.333 "trtype": "pcie", 00:08:34.333 "traddr": "0000:00:10.0", 00:08:34.333 "name": "Nvme0" 00:08:34.333 }, 00:08:34.333 "method": "bdev_nvme_attach_controller" 00:08:34.333 }, 00:08:34.333 { 00:08:34.333 "method": "bdev_wait_for_examine" 00:08:34.333 } 00:08:34.333 ] 00:08:34.333 } 00:08:34.333 ] 00:08:34.333 } 00:08:34.333 [2024-07-14 21:07:45.812863] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:34.333 [2024-07-14 21:07:45.813040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64481 ] 00:08:34.591 [2024-07-14 21:07:45.983643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.591 [2024-07-14 21:07:46.139715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.850 [2024-07-14 21:07:46.285949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:36.043  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:36.043 00:08:36.043 21:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:36.043 21:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:36.043 21:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:36.043 21:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:36.043 21:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:36.043 21:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:36.043 21:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:36.610 21:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:36.610 21:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:36.610 21:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:36.610 21:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:36.610 { 00:08:36.610 "subsystems": [ 00:08:36.610 { 00:08:36.610 "subsystem": "bdev", 00:08:36.610 "config": [ 00:08:36.610 { 00:08:36.610 "params": { 00:08:36.610 "trtype": "pcie", 00:08:36.610 "traddr": "0000:00:10.0", 00:08:36.610 "name": "Nvme0" 00:08:36.610 }, 00:08:36.610 "method": "bdev_nvme_attach_controller" 00:08:36.610 }, 00:08:36.610 { 00:08:36.610 "method": "bdev_wait_for_examine" 00:08:36.610 } 00:08:36.610 ] 00:08:36.610 } 00:08:36.610 ] 00:08:36.610 } 00:08:36.610 [2024-07-14 21:07:47.975506] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:36.610 [2024-07-14 21:07:47.975674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64512 ] 00:08:36.610 [2024-07-14 21:07:48.142449] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.869 [2024-07-14 21:07:48.292507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.127 [2024-07-14 21:07:48.436642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:38.062  Copying: 48/48 [kB] (average 46 MBps) 00:08:38.062 00:08:38.062 21:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:38.062 21:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:38.062 21:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:38.062 21:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:38.062 { 00:08:38.062 "subsystems": [ 00:08:38.062 { 00:08:38.062 "subsystem": "bdev", 00:08:38.062 "config": [ 00:08:38.062 { 00:08:38.062 "params": { 00:08:38.062 "trtype": "pcie", 00:08:38.062 "traddr": "0000:00:10.0", 00:08:38.062 "name": "Nvme0" 00:08:38.062 }, 00:08:38.062 "method": "bdev_nvme_attach_controller" 00:08:38.062 }, 00:08:38.062 { 00:08:38.062 "method": "bdev_wait_for_examine" 00:08:38.062 } 00:08:38.062 ] 00:08:38.062 } 00:08:38.062 ] 00:08:38.062 } 00:08:38.062 [2024-07-14 21:07:49.531186] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:38.062 [2024-07-14 21:07:49.531347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64532 ] 00:08:38.320 [2024-07-14 21:07:49.699611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.320 [2024-07-14 21:07:49.868192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.578 [2024-07-14 21:07:50.014970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:39.772  Copying: 48/48 [kB] (average 46 MBps) 00:08:39.772 00:08:39.772 21:07:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:39.772 21:07:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:39.772 21:07:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:39.772 21:07:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:39.772 21:07:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:39.772 21:07:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:39.772 21:07:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:39.772 21:07:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:39.772 21:07:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:39.772 21:07:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:39.772 21:07:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:39.772 { 00:08:39.772 "subsystems": [ 00:08:39.772 { 00:08:39.772 "subsystem": "bdev", 00:08:39.772 "config": [ 00:08:39.772 { 00:08:39.772 "params": { 00:08:39.772 "trtype": "pcie", 00:08:39.772 "traddr": "0000:00:10.0", 00:08:39.772 "name": "Nvme0" 00:08:39.772 }, 00:08:39.772 "method": "bdev_nvme_attach_controller" 00:08:39.772 }, 00:08:39.772 { 00:08:39.772 "method": "bdev_wait_for_examine" 00:08:39.772 } 00:08:39.772 ] 00:08:39.772 } 00:08:39.772 ] 00:08:39.772 } 00:08:39.772 [2024-07-14 21:07:51.280212] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:39.772 [2024-07-14 21:07:51.280643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64565 ] 00:08:40.032 [2024-07-14 21:07:51.450804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.291 [2024-07-14 21:07:51.598948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.291 [2024-07-14 21:07:51.748950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:41.117  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:41.117 00:08:41.376 00:08:41.376 real 0m32.559s 00:08:41.376 user 0m27.699s 00:08:41.376 sys 0m13.270s 00:08:41.376 21:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.376 ************************************ 00:08:41.376 END TEST dd_rw 00:08:41.376 ************************************ 00:08:41.376 21:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:41.376 21:07:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:08:41.376 21:07:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:41.376 21:07:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:41.376 21:07:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.376 21:07:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:41.376 ************************************ 00:08:41.376 START TEST dd_rw_offset 00:08:41.376 ************************************ 00:08:41.376 21:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:08:41.376 21:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:41.376 21:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:41.376 21:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:41.376 21:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:41.376 21:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:41.377 21:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=8c6hzs2skrjbj10ml8ikbam4iscfu50o3xgmtvwtmqzgqax1ilr53uymtedl6k23eakh6h6k3v3axgzpzyzbtohvt8hbyit6oakz8x28xpf4t6xbo8nbvl4mjnn1j7dbuei7951tgwygyy45ehp6120wb30pmg3zlioblyk5s0lzc7gprhp6lzfzzv1o22k6bexd0f7j3nakg8w7fs3vaxyrpj04kyfbe7hp61vlicn3odhuits4e1urn8xyjkoqqt1twokpml5ps8owb2myw22p3uhv1artebs1l3rrbpvdgmbtovof1k5bsfcy6r3vf3y5xinmy9bdtxnp8pq4gp2016jffay0xbt42mzkke3bst4wko8bqwldb7h2453sonpxylcfr17eniin6w5zm5m4ka55gca0o40f3mozxs0db8la8ab87cny3g5sijns0jj1n1q2zhlpkq6rqtlx6u5okenrvnd9u0ms0vyoahdstyp6r86dqfvi0749mcyjcb2t3fmuxo3bgypfgpv58m9wl1k2mcper8i9z4rse0d3zrzf04i2ivfat7vizemxjqn4ajqxolz3w70jv1hmtayipcq6tz1yhixb110mzcphadd30lpucgm7jv964vl9xc7tpxpm94yqxghk68yfn0oeunvtaiy8gscvx4amq9biige8ma21zocbs2oheixlh06l6s55adjfzeyw4z1xfi81talhqo8e4y19jn1ljpx3r4jzmyus303dlosfveuncvaqsqnh58s3539igkxckazbr4ak2o4x43w1fn0e7yiluhsdhj8aloindztemk8t99y6u4jf6t7x7r48cyr0ruaexaa8yhnr8v0v0892kmtdwgx46xrr6g7peyuudhpr9fj8arhhaznty1pgiwn3mgiab6awcb1pfq2h1nhyc1q7qmd779t37ta2nreyvlk2ncksxoxjqr9kcd6u2ehpx95rtm2lhv7r2mz4ft4jsmnmom2j28ld8wm743a8cqs0xfy0h1ids0ie1c8j0f6v0jjejk87b50mr32b5ds5uv1sq13hxaen2gwn91e2ely2r2n09as018ngpopewtmjkbryasketj16bnfj2x5stabkvozyy0c2dkamwi42hd388x94z2p6u8hkpcwxd9v9z0nf97scuo0iubh1ymcsxmx92jwzz5dia7b2a28g0vqy8f6kpvercledkind3gsx4jxbdox828hy3r1cc0trdyimq0v3r7gtht5khpm6pesvuquvzotgcmwc4lo676tvo0b0z0it7ob6m7mxmg4nks44hgkjvbi1bey7stmzbj0stfljlpbe0jfcxb895xvcdvhc7c8er1qyq18f4i3b1rq3tydueqqnmy45iv8srgw3j65rhlf7fzlpx1pstyvob6b0fdpqc351v4y77g4wkz4tycmar7wxsh69e59ak0ww9eja0e48dei0m82lmsjqsvf8iuj0vva7ozq4vshveeriv29cpo2epq6rkbzr9a8yd18eoynncoriqcbyifx1vhnnp7ritzwsnw0dbhv9hneup8tqau5jsxa4ustsdo3s9zk9th1jc5vd1dz1egr3l7d8h2s7727bddddumkclf9p8uxdernfkrnqopx4uq9felnc7gvo3uxejpuptw66m7kkro8d0orh9prpzctm0fnq5whfmpz77zjxja3xu143in1j96g1cm86f3dsvdng0tu8pwa90fboicia98p7g68agbohgbvd04cqzvyvuqsgbjwoe5b5oc6v2l3k3fwv0fryek2h3q6rbxg3rkmlulu8fqau0m703jiaazofgc9jkxodwkijtfxz7l8mul2qnwh5906q7f91syrtufhpt1svjlomiejxwk1spd847kwojyj7kdbh2kxek2ynkh3uvlwcezaj5aqjlxiw6dx15bu5fp8obiyx8wp0i0px9iwtk5vpfucriv29aso4j7zm0lfx8vzbiag6h1f27fz8hq9vs9fcz7t2xpuar1aj9lkwkx8zedynhc3zwf54n1m5ul49jtowp73dh5fizfuq6roieg3plq8dtpczfcayh8mx3wmhchawi105hk1mhbuahsc2ckjuvgikyuwpgnlb6ruq44f8t1thkjhpqimoym287lucx5p4lg4ih86rfoqr0i7ipems8mg95s3bnv2ws53vi8uutwrf2y0b5p3wb0qpjwwp2irgfqsptvg91czzrhhuk3tdnfehrgcd93lsfejvi63n9orz04a4tylgn69pj0u6knsgfx7yx6bst50ww0055cz8711jbcavweofxrff99elxuzn9u8gbmxzhs37rauqvomk6ni8cnw0r57apaluwn2ea56ltxjc444hqiu9a35o2y273iaznha05tervtgktb5ur7fr8ddprr2y64fh1q5h5fees2fd8qwoeyebqi5vprh0hwh8jcglq0xzwzsk9kilfpvuoibbjvgrcl1wxfsxrtoquahcrbnajb3wxg2cizfroqti1hs6v83jw2p5tnt4g90sh5akg5cc5lhwa8zz20kcfyalaqn67u5wuhnit9z80qk8pxawqclo9vkrmde9sddd1x5vzd7anfww1tgp0t0hwo9mtqp9kleogek91fee5ce7b2o2cikq2yjsw55honvpc7xbnm7g8ylgfb5gqq02h9xlib20j4oses5nccpgwvhuy8tuby6ugst224m5f2ch5dqziym3aaxvm33h1tmwinlvqbl48ps1130ev1zyi0negugde2lgwfn4ekdk8qnchqeucy6nmnhko3pzkwy3cvhppjprxk7eflkn5sex4w4gh08viwmqzz0rd0xlax0zl68vcd4pbukpa9akn2u4v1awa7vrnhnjdnlzx2lue9hw7wz41it6oo1bxepup1t9cgadpt4vtboelnz1zzy2s7kyyt5i7cue14710ynthwv2324fz4ozzaumg50x4x2djmcsnr7rfwjihijaa1h1lous5kvfq9khju4vc5eehnt9qeixqy3apzfnds2mf169groeks8f9yzt8c0j43lf1bbd6hp9kzpqzuo04qcfw2xywlwf47sxedsn33ytdh6r26dae5g1oqvddlskchz5ax76ytpck2icaee4nm5b4y714glytlom0dnwoqj4m6iaxq3ga9ma3j9vxm6y14h4krn4m9a8pp8r3o5k9rpf3wfuvja0wk8md9bbu6iesn8sd86rxnoxferzuiyi7pgsj7k529slqtkbn9kwg5f85xx007fhlm8wrmvw3twg7izav15axc4vhi18m8zpefyvd905ix58ynn12l6qf9bxzdcabhmpt7tk3b0oe2bn1ey71zahgy2cl6g6473qnk86lwss2fbdw1ssy7xin03er2h0ozetrp8ojzwpr0z1xutbwzeevl7vc4omuoolqbtqt3974h9cony8utzpwes4l5g29onuxb2o5uz4ftnqchzxhernwwu6ef4jqnu13i
5s5u603yb5vw3k5x33xfeimhc1fu7s2w4vcfltz8eg8dmxwvw7fh6i9obh2gkbx5deeamdvhsi2rpnkudlsdftyhwxjdm39fmaiqk9f1saazhj46nuo3kpugb90czjrcr2f2j9df753v93g8lvu37x4lnsuuq06hxf93k5uuk0q7ado9h5481ucwch0yi1od961i8cgalozvlvqcs6o1jlxcvoo2eal90nzlwi8u7ry2g9snmi2ulm80zdg10mm4a5dnmi09yz27b4bvqtmqpvi1v20qpw1chuhxwlutx0mbjqjsg9l3mgx2aqvwobdln10bpv5wyij4j9pwa364jtftumowswarutx7d1ywfu8c9nsjmjomm517zggaeevoj1e056qlcqxq54m5dfubjqgojwbexvfw6rctoorgykv6s6u6jkdn0rip66qngst2gf9snchw69zmshzyzpnq1ukrgsel4c4bgb3reyof8vmy1ge7i3lfpw7wcn5ovcvrznjduu3bjk1eai3ptwl0jquq2eie32da4m 00:08:41.377 21:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:41.377 21:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:41.377 21:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:41.377 21:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:41.377 { 00:08:41.377 "subsystems": [ 00:08:41.377 { 00:08:41.377 "subsystem": "bdev", 00:08:41.377 "config": [ 00:08:41.377 { 00:08:41.377 "params": { 00:08:41.377 "trtype": "pcie", 00:08:41.377 "traddr": "0000:00:10.0", 00:08:41.377 "name": "Nvme0" 00:08:41.377 }, 00:08:41.377 "method": "bdev_nvme_attach_controller" 00:08:41.377 }, 00:08:41.377 { 00:08:41.377 "method": "bdev_wait_for_examine" 00:08:41.377 } 00:08:41.377 ] 00:08:41.377 } 00:08:41.377 ] 00:08:41.377 } 00:08:41.377 [2024-07-14 21:07:52.865517] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:41.377 [2024-07-14 21:07:52.865925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64608 ] 00:08:41.636 [2024-07-14 21:07:53.021463] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.636 [2024-07-14 21:07:53.167256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.903 [2024-07-14 21:07:53.314934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:43.095  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:43.095 00:08:43.095 21:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:43.095 21:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:43.095 21:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:43.095 21:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:43.095 { 00:08:43.095 "subsystems": [ 00:08:43.095 { 00:08:43.095 "subsystem": "bdev", 00:08:43.095 "config": [ 00:08:43.095 { 00:08:43.095 "params": { 00:08:43.095 "trtype": "pcie", 00:08:43.095 "traddr": "0000:00:10.0", 00:08:43.095 "name": "Nvme0" 00:08:43.095 }, 00:08:43.095 "method": "bdev_nvme_attach_controller" 00:08:43.095 }, 00:08:43.096 { 00:08:43.096 "method": "bdev_wait_for_examine" 00:08:43.096 } 00:08:43.096 ] 00:08:43.096 } 00:08:43.096 ] 00:08:43.096 } 00:08:43.096 [2024-07-14 21:07:54.515609] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:43.096 [2024-07-14 21:07:54.515827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64633 ] 00:08:43.354 [2024-07-14 21:07:54.669038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.354 [2024-07-14 21:07:54.826875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.618 [2024-07-14 21:07:54.993339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:44.567  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:44.567 00:08:44.567 21:07:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:44.568 21:07:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 8c6hzs2skrjbj10ml8ikbam4iscfu50o3xgmtvwtmqzgqax1ilr53uymtedl6k23eakh6h6k3v3axgzpzyzbtohvt8hbyit6oakz8x28xpf4t6xbo8nbvl4mjnn1j7dbuei7951tgwygyy45ehp6120wb30pmg3zlioblyk5s0lzc7gprhp6lzfzzv1o22k6bexd0f7j3nakg8w7fs3vaxyrpj04kyfbe7hp61vlicn3odhuits4e1urn8xyjkoqqt1twokpml5ps8owb2myw22p3uhv1artebs1l3rrbpvdgmbtovof1k5bsfcy6r3vf3y5xinmy9bdtxnp8pq4gp2016jffay0xbt42mzkke3bst4wko8bqwldb7h2453sonpxylcfr17eniin6w5zm5m4ka55gca0o40f3mozxs0db8la8ab87cny3g5sijns0jj1n1q2zhlpkq6rqtlx6u5okenrvnd9u0ms0vyoahdstyp6r86dqfvi0749mcyjcb2t3fmuxo3bgypfgpv58m9wl1k2mcper8i9z4rse0d3zrzf04i2ivfat7vizemxjqn4ajqxolz3w70jv1hmtayipcq6tz1yhixb110mzcphadd30lpucgm7jv964vl9xc7tpxpm94yqxghk68yfn0oeunvtaiy8gscvx4amq9biige8ma21zocbs2oheixlh06l6s55adjfzeyw4z1xfi81talhqo8e4y19jn1ljpx3r4jzmyus303dlosfveuncvaqsqnh58s3539igkxckazbr4ak2o4x43w1fn0e7yiluhsdhj8aloindztemk8t99y6u4jf6t7x7r48cyr0ruaexaa8yhnr8v0v0892kmtdwgx46xrr6g7peyuudhpr9fj8arhhaznty1pgiwn3mgiab6awcb1pfq2h1nhyc1q7qmd779t37ta2nreyvlk2ncksxoxjqr9kcd6u2ehpx95rtm2lhv7r2mz4ft4jsmnmom2j28ld8wm743a8cqs0xfy0h1ids0ie1c8j0f6v0jjejk87b50mr32b5ds5uv1sq13hxaen2gwn91e2ely2r2n09as018ngpopewtmjkbryasketj16bnfj2x5stabkvozyy0c2dkamwi42hd388x94z2p6u8hkpcwxd9v9z0nf97scuo0iubh1ymcsxmx92jwzz5dia7b2a28g0vqy8f6kpvercledkind3gsx4jxbdox828hy3r1cc0trdyimq0v3r7gtht5khpm6pesvuquvzotgcmwc4lo676tvo0b0z0it7ob6m7mxmg4nks44hgkjvbi1bey7stmzbj0stfljlpbe0jfcxb895xvcdvhc7c8er1qyq18f4i3b1rq3tydueqqnmy45iv8srgw3j65rhlf7fzlpx1pstyvob6b0fdpqc351v4y77g4wkz4tycmar7wxsh69e59ak0ww9eja0e48dei0m82lmsjqsvf8iuj0vva7ozq4vshveeriv29cpo2epq6rkbzr9a8yd18eoynncoriqcbyifx1vhnnp7ritzwsnw0dbhv9hneup8tqau5jsxa4ustsdo3s9zk9th1jc5vd1dz1egr3l7d8h2s7727bddddumkclf9p8uxdernfkrnqopx4uq9felnc7gvo3uxejpuptw66m7kkro8d0orh9prpzctm0fnq5whfmpz77zjxja3xu143in1j96g1cm86f3dsvdng0tu8pwa90fboicia98p7g68agbohgbvd04cqzvyvuqsgbjwoe5b5oc6v2l3k3fwv0fryek2h3q6rbxg3rkmlulu8fqau0m703jiaazofgc9jkxodwkijtfxz7l8mul2qnwh5906q7f91syrtufhpt1svjlomiejxwk1spd847kwojyj7kdbh2kxek2ynkh3uvlwcezaj5aqjlxiw6dx15bu5fp8obiyx8wp0i0px9iwtk5vpfucriv29aso4j7zm0lfx8vzbiag6h1f27fz8hq9vs9fcz7t2xpuar1aj9lkwkx8zedynhc3zwf54n1m5ul49jtowp73dh5fizfuq6roieg3plq8dtpczfcayh8mx3wmhchawi105hk1mhbuahsc2ckjuvgikyuwpgnlb6ruq44f8t1thkjhpqimoym287lucx5p4lg4ih86rfoqr0i7ipems8mg95s3bnv2ws53vi8uutwrf2y0b5p3wb0qpjwwp2irgfqsptvg91czzrhhuk3tdnfehrgcd93lsfejvi63n9orz04a4tylgn69pj0u6knsgfx7yx6bst50ww0055cz8711jbcavweofxrff99elxuzn9u8gbmxzhs37rauqvomk6ni8cnw0r57apaluwn2ea56ltxjc444hqiu9a35o2y273iaznha05tervtgktb5ur7fr8ddprr2y64fh1q5h5fees2fd8qwoeyebqi5vprh0hwh8jcglq0xzwzsk9kilfpvuoibbjvgrcl1wxfsxrtoquahcrbnajb3wxg2cizfroqti1hs6v83jw2p5tnt4g90sh5akg5cc5lhwa8zz20kcfyalaqn67u5wuhnit9z80qk8pxawqclo9vkrmde9sddd
1x5vzd7anfww1tgp0t0hwo9mtqp9kleogek91fee5ce7b2o2cikq2yjsw55honvpc7xbnm7g8ylgfb5gqq02h9xlib20j4oses5nccpgwvhuy8tuby6ugst224m5f2ch5dqziym3aaxvm33h1tmwinlvqbl48ps1130ev1zyi0negugde2lgwfn4ekdk8qnchqeucy6nmnhko3pzkwy3cvhppjprxk7eflkn5sex4w4gh08viwmqzz0rd0xlax0zl68vcd4pbukpa9akn2u4v1awa7vrnhnjdnlzx2lue9hw7wz41it6oo1bxepup1t9cgadpt4vtboelnz1zzy2s7kyyt5i7cue14710ynthwv2324fz4ozzaumg50x4x2djmcsnr7rfwjihijaa1h1lous5kvfq9khju4vc5eehnt9qeixqy3apzfnds2mf169groeks8f9yzt8c0j43lf1bbd6hp9kzpqzuo04qcfw2xywlwf47sxedsn33ytdh6r26dae5g1oqvddlskchz5ax76ytpck2icaee4nm5b4y714glytlom0dnwoqj4m6iaxq3ga9ma3j9vxm6y14h4krn4m9a8pp8r3o5k9rpf3wfuvja0wk8md9bbu6iesn8sd86rxnoxferzuiyi7pgsj7k529slqtkbn9kwg5f85xx007fhlm8wrmvw3twg7izav15axc4vhi18m8zpefyvd905ix58ynn12l6qf9bxzdcabhmpt7tk3b0oe2bn1ey71zahgy2cl6g6473qnk86lwss2fbdw1ssy7xin03er2h0ozetrp8ojzwpr0z1xutbwzeevl7vc4omuoolqbtqt3974h9cony8utzpwes4l5g29onuxb2o5uz4ftnqchzxhernwwu6ef4jqnu13i5s5u603yb5vw3k5x33xfeimhc1fu7s2w4vcfltz8eg8dmxwvw7fh6i9obh2gkbx5deeamdvhsi2rpnkudlsdftyhwxjdm39fmaiqk9f1saazhj46nuo3kpugb90czjrcr2f2j9df753v93g8lvu37x4lnsuuq06hxf93k5uuk0q7ado9h5481ucwch0yi1od961i8cgalozvlvqcs6o1jlxcvoo2eal90nzlwi8u7ry2g9snmi2ulm80zdg10mm4a5dnmi09yz27b4bvqtmqpvi1v20qpw1chuhxwlutx0mbjqjsg9l3mgx2aqvwobdln10bpv5wyij4j9pwa364jtftumowswarutx7d1ywfu8c9nsjmjomm517zggaeevoj1e056qlcqxq54m5dfubjqgojwbexvfw6rctoorgykv6s6u6jkdn0rip66qngst2gf9snchw69zmshzyzpnq1ukrgsel4c4bgb3reyof8vmy1ge7i3lfpw7wcn5ovcvrznjduu3bjk1eai3ptwl0jquq2eie32da4m == \8\c\6\h\z\s\2\s\k\r\j\b\j\1\0\m\l\8\i\k\b\a\m\4\i\s\c\f\u\5\0\o\3\x\g\m\t\v\w\t\m\q\z\g\q\a\x\1\i\l\r\5\3\u\y\m\t\e\d\l\6\k\2\3\e\a\k\h\6\h\6\k\3\v\3\a\x\g\z\p\z\y\z\b\t\o\h\v\t\8\h\b\y\i\t\6\o\a\k\z\8\x\2\8\x\p\f\4\t\6\x\b\o\8\n\b\v\l\4\m\j\n\n\1\j\7\d\b\u\e\i\7\9\5\1\t\g\w\y\g\y\y\4\5\e\h\p\6\1\2\0\w\b\3\0\p\m\g\3\z\l\i\o\b\l\y\k\5\s\0\l\z\c\7\g\p\r\h\p\6\l\z\f\z\z\v\1\o\2\2\k\6\b\e\x\d\0\f\7\j\3\n\a\k\g\8\w\7\f\s\3\v\a\x\y\r\p\j\0\4\k\y\f\b\e\7\h\p\6\1\v\l\i\c\n\3\o\d\h\u\i\t\s\4\e\1\u\r\n\8\x\y\j\k\o\q\q\t\1\t\w\o\k\p\m\l\5\p\s\8\o\w\b\2\m\y\w\2\2\p\3\u\h\v\1\a\r\t\e\b\s\1\l\3\r\r\b\p\v\d\g\m\b\t\o\v\o\f\1\k\5\b\s\f\c\y\6\r\3\v\f\3\y\5\x\i\n\m\y\9\b\d\t\x\n\p\8\p\q\4\g\p\2\0\1\6\j\f\f\a\y\0\x\b\t\4\2\m\z\k\k\e\3\b\s\t\4\w\k\o\8\b\q\w\l\d\b\7\h\2\4\5\3\s\o\n\p\x\y\l\c\f\r\1\7\e\n\i\i\n\6\w\5\z\m\5\m\4\k\a\5\5\g\c\a\0\o\4\0\f\3\m\o\z\x\s\0\d\b\8\l\a\8\a\b\8\7\c\n\y\3\g\5\s\i\j\n\s\0\j\j\1\n\1\q\2\z\h\l\p\k\q\6\r\q\t\l\x\6\u\5\o\k\e\n\r\v\n\d\9\u\0\m\s\0\v\y\o\a\h\d\s\t\y\p\6\r\8\6\d\q\f\v\i\0\7\4\9\m\c\y\j\c\b\2\t\3\f\m\u\x\o\3\b\g\y\p\f\g\p\v\5\8\m\9\w\l\1\k\2\m\c\p\e\r\8\i\9\z\4\r\s\e\0\d\3\z\r\z\f\0\4\i\2\i\v\f\a\t\7\v\i\z\e\m\x\j\q\n\4\a\j\q\x\o\l\z\3\w\7\0\j\v\1\h\m\t\a\y\i\p\c\q\6\t\z\1\y\h\i\x\b\1\1\0\m\z\c\p\h\a\d\d\3\0\l\p\u\c\g\m\7\j\v\9\6\4\v\l\9\x\c\7\t\p\x\p\m\9\4\y\q\x\g\h\k\6\8\y\f\n\0\o\e\u\n\v\t\a\i\y\8\g\s\c\v\x\4\a\m\q\9\b\i\i\g\e\8\m\a\2\1\z\o\c\b\s\2\o\h\e\i\x\l\h\0\6\l\6\s\5\5\a\d\j\f\z\e\y\w\4\z\1\x\f\i\8\1\t\a\l\h\q\o\8\e\4\y\1\9\j\n\1\l\j\p\x\3\r\4\j\z\m\y\u\s\3\0\3\d\l\o\s\f\v\e\u\n\c\v\a\q\s\q\n\h\5\8\s\3\5\3\9\i\g\k\x\c\k\a\z\b\r\4\a\k\2\o\4\x\4\3\w\1\f\n\0\e\7\y\i\l\u\h\s\d\h\j\8\a\l\o\i\n\d\z\t\e\m\k\8\t\9\9\y\6\u\4\j\f\6\t\7\x\7\r\4\8\c\y\r\0\r\u\a\e\x\a\a\8\y\h\n\r\8\v\0\v\0\8\9\2\k\m\t\d\w\g\x\4\6\x\r\r\6\g\7\p\e\y\u\u\d\h\p\r\9\f\j\8\a\r\h\h\a\z\n\t\y\1\p\g\i\w\n\3\m\g\i\a\b\6\a\w\c\b\1\p\f\q\2\h\1\n\h\y\c\1\q\7\q\m\d\7\7\9\t\3\7\t\a\2\n\r\e\y\v\l\k\2\n\c\k\s\x\o\x\j\q\r\9\k\c\d\6\u\2\e\h\p\x\9\5\r\t\m\2\l\h\v\7\r\2\m\z\4\f\t\4\j\s\m\n\m\o\m\2\j\2\8\l\d\8\w\m\7\4\3\a\8\c\q\s\0\x\f\y\0\h\1\i\d\s\0\i\e\1\c\8\j\0\f\6\v\0\
j\j\e\j\k\8\7\b\5\0\m\r\3\2\b\5\d\s\5\u\v\1\s\q\1\3\h\x\a\e\n\2\g\w\n\9\1\e\2\e\l\y\2\r\2\n\0\9\a\s\0\1\8\n\g\p\o\p\e\w\t\m\j\k\b\r\y\a\s\k\e\t\j\1\6\b\n\f\j\2\x\5\s\t\a\b\k\v\o\z\y\y\0\c\2\d\k\a\m\w\i\4\2\h\d\3\8\8\x\9\4\z\2\p\6\u\8\h\k\p\c\w\x\d\9\v\9\z\0\n\f\9\7\s\c\u\o\0\i\u\b\h\1\y\m\c\s\x\m\x\9\2\j\w\z\z\5\d\i\a\7\b\2\a\2\8\g\0\v\q\y\8\f\6\k\p\v\e\r\c\l\e\d\k\i\n\d\3\g\s\x\4\j\x\b\d\o\x\8\2\8\h\y\3\r\1\c\c\0\t\r\d\y\i\m\q\0\v\3\r\7\g\t\h\t\5\k\h\p\m\6\p\e\s\v\u\q\u\v\z\o\t\g\c\m\w\c\4\l\o\6\7\6\t\v\o\0\b\0\z\0\i\t\7\o\b\6\m\7\m\x\m\g\4\n\k\s\4\4\h\g\k\j\v\b\i\1\b\e\y\7\s\t\m\z\b\j\0\s\t\f\l\j\l\p\b\e\0\j\f\c\x\b\8\9\5\x\v\c\d\v\h\c\7\c\8\e\r\1\q\y\q\1\8\f\4\i\3\b\1\r\q\3\t\y\d\u\e\q\q\n\m\y\4\5\i\v\8\s\r\g\w\3\j\6\5\r\h\l\f\7\f\z\l\p\x\1\p\s\t\y\v\o\b\6\b\0\f\d\p\q\c\3\5\1\v\4\y\7\7\g\4\w\k\z\4\t\y\c\m\a\r\7\w\x\s\h\6\9\e\5\9\a\k\0\w\w\9\e\j\a\0\e\4\8\d\e\i\0\m\8\2\l\m\s\j\q\s\v\f\8\i\u\j\0\v\v\a\7\o\z\q\4\v\s\h\v\e\e\r\i\v\2\9\c\p\o\2\e\p\q\6\r\k\b\z\r\9\a\8\y\d\1\8\e\o\y\n\n\c\o\r\i\q\c\b\y\i\f\x\1\v\h\n\n\p\7\r\i\t\z\w\s\n\w\0\d\b\h\v\9\h\n\e\u\p\8\t\q\a\u\5\j\s\x\a\4\u\s\t\s\d\o\3\s\9\z\k\9\t\h\1\j\c\5\v\d\1\d\z\1\e\g\r\3\l\7\d\8\h\2\s\7\7\2\7\b\d\d\d\d\u\m\k\c\l\f\9\p\8\u\x\d\e\r\n\f\k\r\n\q\o\p\x\4\u\q\9\f\e\l\n\c\7\g\v\o\3\u\x\e\j\p\u\p\t\w\6\6\m\7\k\k\r\o\8\d\0\o\r\h\9\p\r\p\z\c\t\m\0\f\n\q\5\w\h\f\m\p\z\7\7\z\j\x\j\a\3\x\u\1\4\3\i\n\1\j\9\6\g\1\c\m\8\6\f\3\d\s\v\d\n\g\0\t\u\8\p\w\a\9\0\f\b\o\i\c\i\a\9\8\p\7\g\6\8\a\g\b\o\h\g\b\v\d\0\4\c\q\z\v\y\v\u\q\s\g\b\j\w\o\e\5\b\5\o\c\6\v\2\l\3\k\3\f\w\v\0\f\r\y\e\k\2\h\3\q\6\r\b\x\g\3\r\k\m\l\u\l\u\8\f\q\a\u\0\m\7\0\3\j\i\a\a\z\o\f\g\c\9\j\k\x\o\d\w\k\i\j\t\f\x\z\7\l\8\m\u\l\2\q\n\w\h\5\9\0\6\q\7\f\9\1\s\y\r\t\u\f\h\p\t\1\s\v\j\l\o\m\i\e\j\x\w\k\1\s\p\d\8\4\7\k\w\o\j\y\j\7\k\d\b\h\2\k\x\e\k\2\y\n\k\h\3\u\v\l\w\c\e\z\a\j\5\a\q\j\l\x\i\w\6\d\x\1\5\b\u\5\f\p\8\o\b\i\y\x\8\w\p\0\i\0\p\x\9\i\w\t\k\5\v\p\f\u\c\r\i\v\2\9\a\s\o\4\j\7\z\m\0\l\f\x\8\v\z\b\i\a\g\6\h\1\f\2\7\f\z\8\h\q\9\v\s\9\f\c\z\7\t\2\x\p\u\a\r\1\a\j\9\l\k\w\k\x\8\z\e\d\y\n\h\c\3\z\w\f\5\4\n\1\m\5\u\l\4\9\j\t\o\w\p\7\3\d\h\5\f\i\z\f\u\q\6\r\o\i\e\g\3\p\l\q\8\d\t\p\c\z\f\c\a\y\h\8\m\x\3\w\m\h\c\h\a\w\i\1\0\5\h\k\1\m\h\b\u\a\h\s\c\2\c\k\j\u\v\g\i\k\y\u\w\p\g\n\l\b\6\r\u\q\4\4\f\8\t\1\t\h\k\j\h\p\q\i\m\o\y\m\2\8\7\l\u\c\x\5\p\4\l\g\4\i\h\8\6\r\f\o\q\r\0\i\7\i\p\e\m\s\8\m\g\9\5\s\3\b\n\v\2\w\s\5\3\v\i\8\u\u\t\w\r\f\2\y\0\b\5\p\3\w\b\0\q\p\j\w\w\p\2\i\r\g\f\q\s\p\t\v\g\9\1\c\z\z\r\h\h\u\k\3\t\d\n\f\e\h\r\g\c\d\9\3\l\s\f\e\j\v\i\6\3\n\9\o\r\z\0\4\a\4\t\y\l\g\n\6\9\p\j\0\u\6\k\n\s\g\f\x\7\y\x\6\b\s\t\5\0\w\w\0\0\5\5\c\z\8\7\1\1\j\b\c\a\v\w\e\o\f\x\r\f\f\9\9\e\l\x\u\z\n\9\u\8\g\b\m\x\z\h\s\3\7\r\a\u\q\v\o\m\k\6\n\i\8\c\n\w\0\r\5\7\a\p\a\l\u\w\n\2\e\a\5\6\l\t\x\j\c\4\4\4\h\q\i\u\9\a\3\5\o\2\y\2\7\3\i\a\z\n\h\a\0\5\t\e\r\v\t\g\k\t\b\5\u\r\7\f\r\8\d\d\p\r\r\2\y\6\4\f\h\1\q\5\h\5\f\e\e\s\2\f\d\8\q\w\o\e\y\e\b\q\i\5\v\p\r\h\0\h\w\h\8\j\c\g\l\q\0\x\z\w\z\s\k\9\k\i\l\f\p\v\u\o\i\b\b\j\v\g\r\c\l\1\w\x\f\s\x\r\t\o\q\u\a\h\c\r\b\n\a\j\b\3\w\x\g\2\c\i\z\f\r\o\q\t\i\1\h\s\6\v\8\3\j\w\2\p\5\t\n\t\4\g\9\0\s\h\5\a\k\g\5\c\c\5\l\h\w\a\8\z\z\2\0\k\c\f\y\a\l\a\q\n\6\7\u\5\w\u\h\n\i\t\9\z\8\0\q\k\8\p\x\a\w\q\c\l\o\9\v\k\r\m\d\e\9\s\d\d\d\1\x\5\v\z\d\7\a\n\f\w\w\1\t\g\p\0\t\0\h\w\o\9\m\t\q\p\9\k\l\e\o\g\e\k\9\1\f\e\e\5\c\e\7\b\2\o\2\c\i\k\q\2\y\j\s\w\5\5\h\o\n\v\p\c\7\x\b\n\m\7\g\8\y\l\g\f\b\5\g\q\q\0\2\h\9\x\l\i\b\2\0\j\4\o\s\e\s\5\n\c\c\p\g\w\v\h\u\y\8\t\u\b\y\6\u\g\s\t\2\2\4\m\5\f\2\c\h\5\d\q\z\i\y\m\3\a\a\x\v\m\3\3\h\1\t\m\w\i\n\l\v\q\b\l\4\8\p\s\1\1\3\0\e\v\1\z\y\i\0\n\e\g\u\g\d\e\2\l\g\w\f\n\4\e\k\d
\k\8\q\n\c\h\q\e\u\c\y\6\n\m\n\h\k\o\3\p\z\k\w\y\3\c\v\h\p\p\j\p\r\x\k\7\e\f\l\k\n\5\s\e\x\4\w\4\g\h\0\8\v\i\w\m\q\z\z\0\r\d\0\x\l\a\x\0\z\l\6\8\v\c\d\4\p\b\u\k\p\a\9\a\k\n\2\u\4\v\1\a\w\a\7\v\r\n\h\n\j\d\n\l\z\x\2\l\u\e\9\h\w\7\w\z\4\1\i\t\6\o\o\1\b\x\e\p\u\p\1\t\9\c\g\a\d\p\t\4\v\t\b\o\e\l\n\z\1\z\z\y\2\s\7\k\y\y\t\5\i\7\c\u\e\1\4\7\1\0\y\n\t\h\w\v\2\3\2\4\f\z\4\o\z\z\a\u\m\g\5\0\x\4\x\2\d\j\m\c\s\n\r\7\r\f\w\j\i\h\i\j\a\a\1\h\1\l\o\u\s\5\k\v\f\q\9\k\h\j\u\4\v\c\5\e\e\h\n\t\9\q\e\i\x\q\y\3\a\p\z\f\n\d\s\2\m\f\1\6\9\g\r\o\e\k\s\8\f\9\y\z\t\8\c\0\j\4\3\l\f\1\b\b\d\6\h\p\9\k\z\p\q\z\u\o\0\4\q\c\f\w\2\x\y\w\l\w\f\4\7\s\x\e\d\s\n\3\3\y\t\d\h\6\r\2\6\d\a\e\5\g\1\o\q\v\d\d\l\s\k\c\h\z\5\a\x\7\6\y\t\p\c\k\2\i\c\a\e\e\4\n\m\5\b\4\y\7\1\4\g\l\y\t\l\o\m\0\d\n\w\o\q\j\4\m\6\i\a\x\q\3\g\a\9\m\a\3\j\9\v\x\m\6\y\1\4\h\4\k\r\n\4\m\9\a\8\p\p\8\r\3\o\5\k\9\r\p\f\3\w\f\u\v\j\a\0\w\k\8\m\d\9\b\b\u\6\i\e\s\n\8\s\d\8\6\r\x\n\o\x\f\e\r\z\u\i\y\i\7\p\g\s\j\7\k\5\2\9\s\l\q\t\k\b\n\9\k\w\g\5\f\8\5\x\x\0\0\7\f\h\l\m\8\w\r\m\v\w\3\t\w\g\7\i\z\a\v\1\5\a\x\c\4\v\h\i\1\8\m\8\z\p\e\f\y\v\d\9\0\5\i\x\5\8\y\n\n\1\2\l\6\q\f\9\b\x\z\d\c\a\b\h\m\p\t\7\t\k\3\b\0\o\e\2\b\n\1\e\y\7\1\z\a\h\g\y\2\c\l\6\g\6\4\7\3\q\n\k\8\6\l\w\s\s\2\f\b\d\w\1\s\s\y\7\x\i\n\0\3\e\r\2\h\0\o\z\e\t\r\p\8\o\j\z\w\p\r\0\z\1\x\u\t\b\w\z\e\e\v\l\7\v\c\4\o\m\u\o\o\l\q\b\t\q\t\3\9\7\4\h\9\c\o\n\y\8\u\t\z\p\w\e\s\4\l\5\g\2\9\o\n\u\x\b\2\o\5\u\z\4\f\t\n\q\c\h\z\x\h\e\r\n\w\w\u\6\e\f\4\j\q\n\u\1\3\i\5\s\5\u\6\0\3\y\b\5\v\w\3\k\5\x\3\3\x\f\e\i\m\h\c\1\f\u\7\s\2\w\4\v\c\f\l\t\z\8\e\g\8\d\m\x\w\v\w\7\f\h\6\i\9\o\b\h\2\g\k\b\x\5\d\e\e\a\m\d\v\h\s\i\2\r\p\n\k\u\d\l\s\d\f\t\y\h\w\x\j\d\m\3\9\f\m\a\i\q\k\9\f\1\s\a\a\z\h\j\4\6\n\u\o\3\k\p\u\g\b\9\0\c\z\j\r\c\r\2\f\2\j\9\d\f\7\5\3\v\9\3\g\8\l\v\u\3\7\x\4\l\n\s\u\u\q\0\6\h\x\f\9\3\k\5\u\u\k\0\q\7\a\d\o\9\h\5\4\8\1\u\c\w\c\h\0\y\i\1\o\d\9\6\1\i\8\c\g\a\l\o\z\v\l\v\q\c\s\6\o\1\j\l\x\c\v\o\o\2\e\a\l\9\0\n\z\l\w\i\8\u\7\r\y\2\g\9\s\n\m\i\2\u\l\m\8\0\z\d\g\1\0\m\m\4\a\5\d\n\m\i\0\9\y\z\2\7\b\4\b\v\q\t\m\q\p\v\i\1\v\2\0\q\p\w\1\c\h\u\h\x\w\l\u\t\x\0\m\b\j\q\j\s\g\9\l\3\m\g\x\2\a\q\v\w\o\b\d\l\n\1\0\b\p\v\5\w\y\i\j\4\j\9\p\w\a\3\6\4\j\t\f\t\u\m\o\w\s\w\a\r\u\t\x\7\d\1\y\w\f\u\8\c\9\n\s\j\m\j\o\m\m\5\1\7\z\g\g\a\e\e\v\o\j\1\e\0\5\6\q\l\c\q\x\q\5\4\m\5\d\f\u\b\j\q\g\o\j\w\b\e\x\v\f\w\6\r\c\t\o\o\r\g\y\k\v\6\s\6\u\6\j\k\d\n\0\r\i\p\6\6\q\n\g\s\t\2\g\f\9\s\n\c\h\w\6\9\z\m\s\h\z\y\z\p\n\q\1\u\k\r\g\s\e\l\4\c\4\b\g\b\3\r\e\y\o\f\8\v\m\y\1\g\e\7\i\3\l\f\p\w\7\w\c\n\5\o\v\c\v\r\z\n\j\d\u\u\3\b\j\k\1\e\a\i\3\p\t\w\l\0\j\q\u\q\2\e\i\e\3\2\d\a\4\m ]] 00:08:44.568 00:08:44.568 real 0m3.261s 00:08:44.568 user 0m2.785s 00:08:44.568 sys 0m1.421s 00:08:44.568 21:07:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.568 ************************************ 00:08:44.568 END TEST dd_rw_offset 00:08:44.568 ************************************ 00:08:44.568 21:07:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:44.568 21:07:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:08:44.568 21:07:56 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:44.568 21:07:56 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:44.568 21:07:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:44.568 21:07:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:44.568 21:07:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:44.568 21:07:56 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:44.568 21:07:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:44.568 21:07:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:44.568 21:07:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:44.568 21:07:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:44.568 21:07:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:44.568 { 00:08:44.568 "subsystems": [ 00:08:44.568 { 00:08:44.568 "subsystem": "bdev", 00:08:44.568 "config": [ 00:08:44.568 { 00:08:44.568 "params": { 00:08:44.568 "trtype": "pcie", 00:08:44.568 "traddr": "0000:00:10.0", 00:08:44.568 "name": "Nvme0" 00:08:44.568 }, 00:08:44.568 "method": "bdev_nvme_attach_controller" 00:08:44.568 }, 00:08:44.568 { 00:08:44.568 "method": "bdev_wait_for_examine" 00:08:44.568 } 00:08:44.568 ] 00:08:44.568 } 00:08:44.568 ] 00:08:44.568 } 00:08:44.827 [2024-07-14 21:07:56.125417] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:44.827 [2024-07-14 21:07:56.125573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64675 ] 00:08:44.827 [2024-07-14 21:07:56.288345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.087 [2024-07-14 21:07:56.460433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.087 [2024-07-14 21:07:56.631086] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:46.283  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:46.283 00:08:46.283 21:07:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:46.283 ************************************ 00:08:46.283 END TEST spdk_dd_basic_rw 00:08:46.283 ************************************ 00:08:46.283 00:08:46.283 real 0m39.689s 00:08:46.283 user 0m33.481s 00:08:46.283 sys 0m15.970s 00:08:46.283 21:07:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:46.283 21:07:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:46.283 21:07:57 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:46.283 21:07:57 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:46.283 21:07:57 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:46.283 21:07:57 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.283 21:07:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:46.542 ************************************ 00:08:46.542 START TEST spdk_dd_posix 00:08:46.542 ************************************ 00:08:46.542 21:07:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:46.542 * Looking for test storage... 
00:08:46.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:46.542 21:07:57 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:46.542 21:07:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.542 21:07:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.542 21:07:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.542 21:07:57 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:46.543 * First test run, liburing in use 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:46.543 ************************************ 00:08:46.543 START TEST dd_flag_append 00:08:46.543 ************************************ 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=ytelgsyhhjuyrlqtwd7jfhs7b3t6q6rw 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=jinsirvdeaxtlma4gowmsukeof2hwja9 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s ytelgsyhhjuyrlqtwd7jfhs7b3t6q6rw 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s jinsirvdeaxtlma4gowmsukeof2hwja9 00:08:46.543 21:07:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:46.543 [2024-07-14 21:07:58.022673] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:46.543 [2024-07-14 21:07:58.022827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64751 ] 00:08:46.802 [2024-07-14 21:07:58.176954] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.802 [2024-07-14 21:07:58.346165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.061 [2024-07-14 21:07:58.498548] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:48.437  Copying: 32/32 [B] (average 31 kBps) 00:08:48.437 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ jinsirvdeaxtlma4gowmsukeof2hwja9ytelgsyhhjuyrlqtwd7jfhs7b3t6q6rw == \j\i\n\s\i\r\v\d\e\a\x\t\l\m\a\4\g\o\w\m\s\u\k\e\o\f\2\h\w\j\a\9\y\t\e\l\g\s\y\h\h\j\u\y\r\l\q\t\w\d\7\j\f\h\s\7\b\3\t\6\q\6\r\w ]] 00:08:48.437 00:08:48.437 real 0m1.653s 00:08:48.437 user 0m1.362s 00:08:48.437 sys 0m0.800s 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.437 ************************************ 00:08:48.437 END TEST dd_flag_append 00:08:48.437 ************************************ 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:48.437 ************************************ 00:08:48.437 START TEST dd_flag_directory 00:08:48.437 ************************************ 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
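The append run above finished cleanly: 32 bytes were copied and dd.dump1 ended up holding dump1 immediately followed by dump0, which is exactly what the long [[ ... ]] comparison asserts. A minimal restatement of the same check with plain GNU dd (coreutils flag names assumed):
  printf %s "$dump0" > dd.dump0
  printf %s "$dump1" > dd.dump1
  dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc status=none
  [[ $(<dd.dump1) == "${dump1}${dump0}" ]] && echo 'append OK'
conv=notrunc is what keeps the existing dump1 bytes in place; the passing comparison above shows spdk_dd's --oflag=append preserves them the same way.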
00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.437 21:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.437 [2024-07-14 21:07:59.741412] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:48.437 [2024-07-14 21:07:59.741583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64791 ] 00:08:48.438 [2024-07-14 21:07:59.909658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.697 [2024-07-14 21:08:00.084519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.697 [2024-07-14 21:08:00.241841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:48.956 [2024-07-14 21:08:00.324525] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:48.956 [2024-07-14 21:08:00.324585] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:48.956 [2024-07-14 21:08:00.324624] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:49.524 [2024-07-14 21:08:00.927476] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:49.783 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:49.784 21:08:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:50.042 [2024-07-14 21:08:01.388503] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:50.042 [2024-07-14 21:08:01.388659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64813 ] 00:08:50.042 [2024-07-14 21:08:01.542788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.300 [2024-07-14 21:08:01.704713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.558 [2024-07-14 21:08:01.863613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:50.558 [2024-07-14 21:08:01.945102] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:50.558 [2024-07-14 21:08:01.945160] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:50.558 [2024-07-14 21:08:01.945217] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:51.125 [2024-07-14 21:08:02.546612] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:51.691 00:08:51.691 real 0m3.301s 00:08:51.691 user 0m2.715s 00:08:51.691 sys 0m0.366s 00:08:51.691 ************************************ 00:08:51.691 END TEST dd_flag_directory 00:08:51.691 ************************************ 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@10 -- # set +x 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:51.691 ************************************ 00:08:51.691 START TEST dd_flag_nofollow 00:08:51.691 ************************************ 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.691 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:51.692 21:08:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:51.692 
[2024-07-14 21:08:03.105620] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:51.692 [2024-07-14 21:08:03.105934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64860 ] 00:08:51.948 [2024-07-14 21:08:03.282279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.948 [2024-07-14 21:08:03.449076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.205 [2024-07-14 21:08:03.599877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:52.205 [2024-07-14 21:08:03.690251] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:52.205 [2024-07-14 21:08:03.690308] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:52.205 [2024-07-14 21:08:03.690351] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:52.772 [2024-07-14 21:08:04.317218] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:53.340 21:08:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:53.340 [2024-07-14 21:08:04.863749] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:53.340 [2024-07-14 21:08:04.863932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64881 ] 00:08:53.599 [2024-07-14 21:08:05.025562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.857 [2024-07-14 21:08:05.227178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.116 [2024-07-14 21:08:05.407771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:54.116 [2024-07-14 21:08:05.483587] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:54.116 [2024-07-14 21:08:05.483647] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:54.116 [2024-07-14 21:08:05.483688] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:54.684 [2024-07-14 21:08:06.080460] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:54.943 21:08:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:08:54.943 21:08:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:54.943 21:08:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:08:54.943 21:08:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:08:54.943 21:08:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:08:54.943 21:08:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:54.943 21:08:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:54.943 21:08:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:54.943 21:08:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:54.943 21:08:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:55.202 [2024-07-14 21:08:06.576403] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
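Both NOT-wrapped runs above fail with "Too many levels of symbolic links" (ELOOP) because nofollow refuses to open dd.dump0.link / dd.dump1.link through the symlink; the run starting here drops the flag, so the link is followed and the copy is expected to succeed. A sketch of the same behaviour with GNU dd, assuming its nofollow flag maps onto O_NOFOLLOW:
  ln -fs dd.dump0 dd.dump0.link
  dd if=dd.dump0.link of=out iflag=nofollow status=none 2>/dev/null \
      && echo 'unexpected: symlink was opened' \
      || echo 'open fails with ELOOP, as in the two runs above'
  dd if=dd.dump0.link of=out status=none    # default: symlink followed, copy succeeds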
00:08:55.202 [2024-07-14 21:08:06.576582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64901 ] 00:08:55.202 [2024-07-14 21:08:06.746536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.460 [2024-07-14 21:08:06.915274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.717 [2024-07-14 21:08:07.077462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:56.652  Copying: 512/512 [B] (average 500 kBps) 00:08:56.652 00:08:56.652 ************************************ 00:08:56.652 END TEST dd_flag_nofollow 00:08:56.652 ************************************ 00:08:56.652 21:08:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ s7kz67lb6o1nna2gkz8be4y2qqbufm4m29ciyeixwxz9vkjpq6gr0vbsbohg6j94oigxgyaa8a33sgnakfk2yu2qg305uh4y48yj6sffrkgg8il2jfrkgmxxupknanrl1wdodo7raor0aqs2q4d778saztbcrf8c2q9yued044lskfomli9npattzk9un74gwayf9bk4pt7xssjaxu55ud4a4p14lczbsxob1fwsev6bn0io5v56hbcuzuzmiefdrwkseb043xcr9ozai2o8a0awjg6d2cnov2zn40dd8cwz4erxqsrzoyienfwc1qpty3cz5077iw536vmgs24dceek9s50d4nu8mw9gxjq1nldqadpl16jh7ltzjxcdfbmiz97ra8rsa6f7xegfig7zpigk72fa95ka73gw704c2g77su6a9pgusagp3om3pexx5dvnjxumfhc35mgvw72um8zfbqyd3epcnvjyb0it91na52kbncqvkn2i6n2o7do == \s\7\k\z\6\7\l\b\6\o\1\n\n\a\2\g\k\z\8\b\e\4\y\2\q\q\b\u\f\m\4\m\2\9\c\i\y\e\i\x\w\x\z\9\v\k\j\p\q\6\g\r\0\v\b\s\b\o\h\g\6\j\9\4\o\i\g\x\g\y\a\a\8\a\3\3\s\g\n\a\k\f\k\2\y\u\2\q\g\3\0\5\u\h\4\y\4\8\y\j\6\s\f\f\r\k\g\g\8\i\l\2\j\f\r\k\g\m\x\x\u\p\k\n\a\n\r\l\1\w\d\o\d\o\7\r\a\o\r\0\a\q\s\2\q\4\d\7\7\8\s\a\z\t\b\c\r\f\8\c\2\q\9\y\u\e\d\0\4\4\l\s\k\f\o\m\l\i\9\n\p\a\t\t\z\k\9\u\n\7\4\g\w\a\y\f\9\b\k\4\p\t\7\x\s\s\j\a\x\u\5\5\u\d\4\a\4\p\1\4\l\c\z\b\s\x\o\b\1\f\w\s\e\v\6\b\n\0\i\o\5\v\5\6\h\b\c\u\z\u\z\m\i\e\f\d\r\w\k\s\e\b\0\4\3\x\c\r\9\o\z\a\i\2\o\8\a\0\a\w\j\g\6\d\2\c\n\o\v\2\z\n\4\0\d\d\8\c\w\z\4\e\r\x\q\s\r\z\o\y\i\e\n\f\w\c\1\q\p\t\y\3\c\z\5\0\7\7\i\w\5\3\6\v\m\g\s\2\4\d\c\e\e\k\9\s\5\0\d\4\n\u\8\m\w\9\g\x\j\q\1\n\l\d\q\a\d\p\l\1\6\j\h\7\l\t\z\j\x\c\d\f\b\m\i\z\9\7\r\a\8\r\s\a\6\f\7\x\e\g\f\i\g\7\z\p\i\g\k\7\2\f\a\9\5\k\a\7\3\g\w\7\0\4\c\2\g\7\7\s\u\6\a\9\p\g\u\s\a\g\p\3\o\m\3\p\e\x\x\5\d\v\n\j\x\u\m\f\h\c\3\5\m\g\v\w\7\2\u\m\8\z\f\b\q\y\d\3\e\p\c\n\v\j\y\b\0\i\t\9\1\n\a\5\2\k\b\n\c\q\v\k\n\2\i\6\n\2\o\7\d\o ]] 00:08:56.652 00:08:56.652 real 0m5.152s 00:08:56.652 user 0m4.223s 00:08:56.652 sys 0m1.196s 00:08:56.652 21:08:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.652 21:08:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:56.652 21:08:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:56.652 21:08:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:56.652 21:08:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:56.652 21:08:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.652 21:08:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:56.652 ************************************ 00:08:56.652 START TEST dd_flag_noatime 00:08:56.652 ************************************ 00:08:56.652 21:08:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:08:56.652 21:08:08 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:08:56.652 21:08:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:56.652 21:08:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:56.652 21:08:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:56.652 21:08:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:56.911 21:08:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:56.911 21:08:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1720991287 00:08:56.911 21:08:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:56.911 21:08:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1720991288 00:08:56.911 21:08:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:57.846 21:08:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:57.846 [2024-07-14 21:08:09.326717] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:57.846 [2024-07-14 21:08:09.326926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64961 ] 00:08:58.104 [2024-07-14 21:08:09.500114] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.387 [2024-07-14 21:08:09.707719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.387 [2024-07-14 21:08:09.850463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:59.582  Copying: 512/512 [B] (average 500 kBps) 00:08:59.582 00:08:59.582 21:08:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:59.582 21:08:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1720991287 )) 00:08:59.582 21:08:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:59.582 21:08:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1720991288 )) 00:08:59.582 21:08:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:59.582 [2024-07-14 21:08:11.090124] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
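The (( atime_if == 1720991287 )) check just above confirms that the --iflag=noatime copy left the source's access time untouched; the run now starting repeats the copy without the flag before the final stat comparison. The same idea restated with stat and GNU dd (note that honouring O_NOATIME requires owning the file or CAP_FOWNER, and a noatime/relatime mount can mask the difference):
  atime_before=$(stat -c %X dd.dump0)
  dd if=dd.dump0 of=dd.dump1 iflag=noatime status=none
  atime_after=$(stat -c %X dd.dump0)
  (( atime_before == atime_after )) && echo 'source atime untouched'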
00:08:59.582 [2024-07-14 21:08:11.090288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64981 ] 00:08:59.841 [2024-07-14 21:08:11.262927] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.100 [2024-07-14 21:08:11.496886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.360 [2024-07-14 21:08:11.684669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:01.736  Copying: 512/512 [B] (average 500 kBps) 00:09:01.736 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1720991291 )) 00:09:01.736 00:09:01.736 real 0m4.713s 00:09:01.736 user 0m3.073s 00:09:01.736 sys 0m1.731s 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:09:01.736 ************************************ 00:09:01.736 END TEST dd_flag_noatime 00:09:01.736 ************************************ 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:01.736 ************************************ 00:09:01.736 START TEST dd_flags_misc 00:09:01.736 ************************************ 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:01.736 21:08:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:01.736 [2024-07-14 21:08:13.080985] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
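dd_flags_misc, now starting, sweeps every read flag against every write flag and re-verifies the copied bytes each time; the direct/direct run above is the first of eight combinations. The shape of the loop as a sketch (with plain GNU dd the direct cases may additionally need block-aligned sizes, so treat this purely as an illustration of the matrix):
  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
          dd if=dd.dump0 of=dd.dump1 iflag="$flag_ro" oflag="$flag_rw" status=none
          cmp -s dd.dump0 dd.dump1 || echo "content mismatch for $flag_ro/$flag_rw"
      done
  done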
00:09:01.736 [2024-07-14 21:08:13.081162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65027 ] 00:09:01.736 [2024-07-14 21:08:13.257569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.994 [2024-07-14 21:08:13.497553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.252 [2024-07-14 21:08:13.683274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:03.628  Copying: 512/512 [B] (average 500 kBps) 00:09:03.628 00:09:03.628 21:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ y8k61rfu5v0xvrgu0ac4uhu6fjp0spsmpoogm2mep9mff4zcje1fnwryf8i4k83gpi5lo0dcjbr18e7vn3uj0relsd2a3xmso7xzqs2506mimnav63o6xv2a5aejxze1jka0p2ozepb1dnsey8fj3khih2oua8dysmrcnoeup7yssrf49vlb40dfqqwmt3g8p9cw3tv5b96hg6odscwac4w72f309wd3dz2fezo02bbc8n55ldcqqv3d3vsqw6iz21bl9jgksupqm3q1vhab1j8wvidon8dfuvs2ltlr9wqlg0t82fhchapgoepgrro665xv06x8f5f2jfj2s5l8hmi7s41xm0rm77ilvrp0t4zv68uo03pqqkvidvcpdtyo92wel7ic8mm1m4v0te8m66y4jm41yszctj6mvner02t0bk56slkgc7px86dxzy70k6kvk8c5hqp1lgy18841okmwvnpu6r7cquzqyygjrtb8sjj7afqdccyodrj1w65f == \y\8\k\6\1\r\f\u\5\v\0\x\v\r\g\u\0\a\c\4\u\h\u\6\f\j\p\0\s\p\s\m\p\o\o\g\m\2\m\e\p\9\m\f\f\4\z\c\j\e\1\f\n\w\r\y\f\8\i\4\k\8\3\g\p\i\5\l\o\0\d\c\j\b\r\1\8\e\7\v\n\3\u\j\0\r\e\l\s\d\2\a\3\x\m\s\o\7\x\z\q\s\2\5\0\6\m\i\m\n\a\v\6\3\o\6\x\v\2\a\5\a\e\j\x\z\e\1\j\k\a\0\p\2\o\z\e\p\b\1\d\n\s\e\y\8\f\j\3\k\h\i\h\2\o\u\a\8\d\y\s\m\r\c\n\o\e\u\p\7\y\s\s\r\f\4\9\v\l\b\4\0\d\f\q\q\w\m\t\3\g\8\p\9\c\w\3\t\v\5\b\9\6\h\g\6\o\d\s\c\w\a\c\4\w\7\2\f\3\0\9\w\d\3\d\z\2\f\e\z\o\0\2\b\b\c\8\n\5\5\l\d\c\q\q\v\3\d\3\v\s\q\w\6\i\z\2\1\b\l\9\j\g\k\s\u\p\q\m\3\q\1\v\h\a\b\1\j\8\w\v\i\d\o\n\8\d\f\u\v\s\2\l\t\l\r\9\w\q\l\g\0\t\8\2\f\h\c\h\a\p\g\o\e\p\g\r\r\o\6\6\5\x\v\0\6\x\8\f\5\f\2\j\f\j\2\s\5\l\8\h\m\i\7\s\4\1\x\m\0\r\m\7\7\i\l\v\r\p\0\t\4\z\v\6\8\u\o\0\3\p\q\q\k\v\i\d\v\c\p\d\t\y\o\9\2\w\e\l\7\i\c\8\m\m\1\m\4\v\0\t\e\8\m\6\6\y\4\j\m\4\1\y\s\z\c\t\j\6\m\v\n\e\r\0\2\t\0\b\k\5\6\s\l\k\g\c\7\p\x\8\6\d\x\z\y\7\0\k\6\k\v\k\8\c\5\h\q\p\1\l\g\y\1\8\8\4\1\o\k\m\w\v\n\p\u\6\r\7\c\q\u\z\q\y\y\g\j\r\t\b\8\s\j\j\7\a\f\q\d\c\c\y\o\d\r\j\1\w\6\5\f ]] 00:09:03.628 21:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:03.628 21:08:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:03.628 [2024-07-14 21:08:14.991700] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:03.628 [2024-07-14 21:08:14.991903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65054 ] 00:09:03.628 [2024-07-14 21:08:15.160558] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.887 [2024-07-14 21:08:15.348307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.146 [2024-07-14 21:08:15.528478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:05.080  Copying: 512/512 [B] (average 500 kBps) 00:09:05.080 00:09:05.339 21:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ y8k61rfu5v0xvrgu0ac4uhu6fjp0spsmpoogm2mep9mff4zcje1fnwryf8i4k83gpi5lo0dcjbr18e7vn3uj0relsd2a3xmso7xzqs2506mimnav63o6xv2a5aejxze1jka0p2ozepb1dnsey8fj3khih2oua8dysmrcnoeup7yssrf49vlb40dfqqwmt3g8p9cw3tv5b96hg6odscwac4w72f309wd3dz2fezo02bbc8n55ldcqqv3d3vsqw6iz21bl9jgksupqm3q1vhab1j8wvidon8dfuvs2ltlr9wqlg0t82fhchapgoepgrro665xv06x8f5f2jfj2s5l8hmi7s41xm0rm77ilvrp0t4zv68uo03pqqkvidvcpdtyo92wel7ic8mm1m4v0te8m66y4jm41yszctj6mvner02t0bk56slkgc7px86dxzy70k6kvk8c5hqp1lgy18841okmwvnpu6r7cquzqyygjrtb8sjj7afqdccyodrj1w65f == \y\8\k\6\1\r\f\u\5\v\0\x\v\r\g\u\0\a\c\4\u\h\u\6\f\j\p\0\s\p\s\m\p\o\o\g\m\2\m\e\p\9\m\f\f\4\z\c\j\e\1\f\n\w\r\y\f\8\i\4\k\8\3\g\p\i\5\l\o\0\d\c\j\b\r\1\8\e\7\v\n\3\u\j\0\r\e\l\s\d\2\a\3\x\m\s\o\7\x\z\q\s\2\5\0\6\m\i\m\n\a\v\6\3\o\6\x\v\2\a\5\a\e\j\x\z\e\1\j\k\a\0\p\2\o\z\e\p\b\1\d\n\s\e\y\8\f\j\3\k\h\i\h\2\o\u\a\8\d\y\s\m\r\c\n\o\e\u\p\7\y\s\s\r\f\4\9\v\l\b\4\0\d\f\q\q\w\m\t\3\g\8\p\9\c\w\3\t\v\5\b\9\6\h\g\6\o\d\s\c\w\a\c\4\w\7\2\f\3\0\9\w\d\3\d\z\2\f\e\z\o\0\2\b\b\c\8\n\5\5\l\d\c\q\q\v\3\d\3\v\s\q\w\6\i\z\2\1\b\l\9\j\g\k\s\u\p\q\m\3\q\1\v\h\a\b\1\j\8\w\v\i\d\o\n\8\d\f\u\v\s\2\l\t\l\r\9\w\q\l\g\0\t\8\2\f\h\c\h\a\p\g\o\e\p\g\r\r\o\6\6\5\x\v\0\6\x\8\f\5\f\2\j\f\j\2\s\5\l\8\h\m\i\7\s\4\1\x\m\0\r\m\7\7\i\l\v\r\p\0\t\4\z\v\6\8\u\o\0\3\p\q\q\k\v\i\d\v\c\p\d\t\y\o\9\2\w\e\l\7\i\c\8\m\m\1\m\4\v\0\t\e\8\m\6\6\y\4\j\m\4\1\y\s\z\c\t\j\6\m\v\n\e\r\0\2\t\0\b\k\5\6\s\l\k\g\c\7\p\x\8\6\d\x\z\y\7\0\k\6\k\v\k\8\c\5\h\q\p\1\l\g\y\1\8\8\4\1\o\k\m\w\v\n\p\u\6\r\7\c\q\u\z\q\y\y\g\j\r\t\b\8\s\j\j\7\a\f\q\d\c\c\y\o\d\r\j\1\w\6\5\f ]] 00:09:05.339 21:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:05.339 21:08:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:05.339 [2024-07-14 21:08:16.735318] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:05.339 [2024-07-14 21:08:16.735487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65081 ] 00:09:05.597 [2024-07-14 21:08:16.902798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.597 [2024-07-14 21:08:17.051698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.857 [2024-07-14 21:08:17.223694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.793  Copying: 512/512 [B] (average 100 kBps) 00:09:06.793 00:09:06.793 21:08:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ y8k61rfu5v0xvrgu0ac4uhu6fjp0spsmpoogm2mep9mff4zcje1fnwryf8i4k83gpi5lo0dcjbr18e7vn3uj0relsd2a3xmso7xzqs2506mimnav63o6xv2a5aejxze1jka0p2ozepb1dnsey8fj3khih2oua8dysmrcnoeup7yssrf49vlb40dfqqwmt3g8p9cw3tv5b96hg6odscwac4w72f309wd3dz2fezo02bbc8n55ldcqqv3d3vsqw6iz21bl9jgksupqm3q1vhab1j8wvidon8dfuvs2ltlr9wqlg0t82fhchapgoepgrro665xv06x8f5f2jfj2s5l8hmi7s41xm0rm77ilvrp0t4zv68uo03pqqkvidvcpdtyo92wel7ic8mm1m4v0te8m66y4jm41yszctj6mvner02t0bk56slkgc7px86dxzy70k6kvk8c5hqp1lgy18841okmwvnpu6r7cquzqyygjrtb8sjj7afqdccyodrj1w65f == \y\8\k\6\1\r\f\u\5\v\0\x\v\r\g\u\0\a\c\4\u\h\u\6\f\j\p\0\s\p\s\m\p\o\o\g\m\2\m\e\p\9\m\f\f\4\z\c\j\e\1\f\n\w\r\y\f\8\i\4\k\8\3\g\p\i\5\l\o\0\d\c\j\b\r\1\8\e\7\v\n\3\u\j\0\r\e\l\s\d\2\a\3\x\m\s\o\7\x\z\q\s\2\5\0\6\m\i\m\n\a\v\6\3\o\6\x\v\2\a\5\a\e\j\x\z\e\1\j\k\a\0\p\2\o\z\e\p\b\1\d\n\s\e\y\8\f\j\3\k\h\i\h\2\o\u\a\8\d\y\s\m\r\c\n\o\e\u\p\7\y\s\s\r\f\4\9\v\l\b\4\0\d\f\q\q\w\m\t\3\g\8\p\9\c\w\3\t\v\5\b\9\6\h\g\6\o\d\s\c\w\a\c\4\w\7\2\f\3\0\9\w\d\3\d\z\2\f\e\z\o\0\2\b\b\c\8\n\5\5\l\d\c\q\q\v\3\d\3\v\s\q\w\6\i\z\2\1\b\l\9\j\g\k\s\u\p\q\m\3\q\1\v\h\a\b\1\j\8\w\v\i\d\o\n\8\d\f\u\v\s\2\l\t\l\r\9\w\q\l\g\0\t\8\2\f\h\c\h\a\p\g\o\e\p\g\r\r\o\6\6\5\x\v\0\6\x\8\f\5\f\2\j\f\j\2\s\5\l\8\h\m\i\7\s\4\1\x\m\0\r\m\7\7\i\l\v\r\p\0\t\4\z\v\6\8\u\o\0\3\p\q\q\k\v\i\d\v\c\p\d\t\y\o\9\2\w\e\l\7\i\c\8\m\m\1\m\4\v\0\t\e\8\m\6\6\y\4\j\m\4\1\y\s\z\c\t\j\6\m\v\n\e\r\0\2\t\0\b\k\5\6\s\l\k\g\c\7\p\x\8\6\d\x\z\y\7\0\k\6\k\v\k\8\c\5\h\q\p\1\l\g\y\1\8\8\4\1\o\k\m\w\v\n\p\u\6\r\7\c\q\u\z\q\y\y\g\j\r\t\b\8\s\j\j\7\a\f\q\d\c\c\y\o\d\r\j\1\w\6\5\f ]] 00:09:06.793 21:08:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:06.793 21:08:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:07.050 [2024-07-14 21:08:18.365597] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:07.050 [2024-07-14 21:08:18.365789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65103 ] 00:09:07.050 [2024-07-14 21:08:18.534033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.308 [2024-07-14 21:08:18.702224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.566 [2024-07-14 21:08:18.860013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:08.500  Copying: 512/512 [B] (average 500 kBps) 00:09:08.500 00:09:08.501 21:08:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ y8k61rfu5v0xvrgu0ac4uhu6fjp0spsmpoogm2mep9mff4zcje1fnwryf8i4k83gpi5lo0dcjbr18e7vn3uj0relsd2a3xmso7xzqs2506mimnav63o6xv2a5aejxze1jka0p2ozepb1dnsey8fj3khih2oua8dysmrcnoeup7yssrf49vlb40dfqqwmt3g8p9cw3tv5b96hg6odscwac4w72f309wd3dz2fezo02bbc8n55ldcqqv3d3vsqw6iz21bl9jgksupqm3q1vhab1j8wvidon8dfuvs2ltlr9wqlg0t82fhchapgoepgrro665xv06x8f5f2jfj2s5l8hmi7s41xm0rm77ilvrp0t4zv68uo03pqqkvidvcpdtyo92wel7ic8mm1m4v0te8m66y4jm41yszctj6mvner02t0bk56slkgc7px86dxzy70k6kvk8c5hqp1lgy18841okmwvnpu6r7cquzqyygjrtb8sjj7afqdccyodrj1w65f == \y\8\k\6\1\r\f\u\5\v\0\x\v\r\g\u\0\a\c\4\u\h\u\6\f\j\p\0\s\p\s\m\p\o\o\g\m\2\m\e\p\9\m\f\f\4\z\c\j\e\1\f\n\w\r\y\f\8\i\4\k\8\3\g\p\i\5\l\o\0\d\c\j\b\r\1\8\e\7\v\n\3\u\j\0\r\e\l\s\d\2\a\3\x\m\s\o\7\x\z\q\s\2\5\0\6\m\i\m\n\a\v\6\3\o\6\x\v\2\a\5\a\e\j\x\z\e\1\j\k\a\0\p\2\o\z\e\p\b\1\d\n\s\e\y\8\f\j\3\k\h\i\h\2\o\u\a\8\d\y\s\m\r\c\n\o\e\u\p\7\y\s\s\r\f\4\9\v\l\b\4\0\d\f\q\q\w\m\t\3\g\8\p\9\c\w\3\t\v\5\b\9\6\h\g\6\o\d\s\c\w\a\c\4\w\7\2\f\3\0\9\w\d\3\d\z\2\f\e\z\o\0\2\b\b\c\8\n\5\5\l\d\c\q\q\v\3\d\3\v\s\q\w\6\i\z\2\1\b\l\9\j\g\k\s\u\p\q\m\3\q\1\v\h\a\b\1\j\8\w\v\i\d\o\n\8\d\f\u\v\s\2\l\t\l\r\9\w\q\l\g\0\t\8\2\f\h\c\h\a\p\g\o\e\p\g\r\r\o\6\6\5\x\v\0\6\x\8\f\5\f\2\j\f\j\2\s\5\l\8\h\m\i\7\s\4\1\x\m\0\r\m\7\7\i\l\v\r\p\0\t\4\z\v\6\8\u\o\0\3\p\q\q\k\v\i\d\v\c\p\d\t\y\o\9\2\w\e\l\7\i\c\8\m\m\1\m\4\v\0\t\e\8\m\6\6\y\4\j\m\4\1\y\s\z\c\t\j\6\m\v\n\e\r\0\2\t\0\b\k\5\6\s\l\k\g\c\7\p\x\8\6\d\x\z\y\7\0\k\6\k\v\k\8\c\5\h\q\p\1\l\g\y\1\8\8\4\1\o\k\m\w\v\n\p\u\6\r\7\c\q\u\z\q\y\y\g\j\r\t\b\8\s\j\j\7\a\f\q\d\c\c\y\o\d\r\j\1\w\6\5\f ]] 00:09:08.501 21:08:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:08.501 21:08:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:09:08.501 21:08:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:09:08.501 21:08:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:08.501 21:08:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:08.501 21:08:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:08.759 [2024-07-14 21:08:20.057615] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:08.759 [2024-07-14 21:08:20.057825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65124 ] 00:09:08.759 [2024-07-14 21:08:20.225160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.018 [2024-07-14 21:08:20.392358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.018 [2024-07-14 21:08:20.557454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:10.213  Copying: 512/512 [B] (average 500 kBps) 00:09:10.213 00:09:10.213 21:08:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ndm2t4nuqpq8mcxknc9z5vw3e1iqwdnxzd8az8vn0ackgn3fyssdyq4406k59iltr4vtt2pcclzwummlubflfcmps9dlrafdmwk6qyalx23afheeaafv85mji00gqyabnyz5pua7nkr0is6nw43e2d72uqjplrnojwth3wiekrzcmtacsmbra7ebpgsv4x4xw5xyh8lr0hk3mrpxf0iqmdu3xiqa4wp1zmxwnpiocedxmfy1var4awcuto1ns6rtwk7qbk4xy9w7pq9vbhm767r482l5gnx2ane4k4u92sw55sly9qyrooi27ntsrplsdfhqsp7wjrhax8vr7do4f44lqm1o373hy57qg7gxjmfoo8bg2ul4thtd6jvi3w6s7qebkci3gzy7wexan02q1mpgiq6zgjfpzee8erjpqxmgu09ywfay4iboim6o1of4yiuotthffz7gj35qe6cu0dz7rn16njudbmnmip992fgaq6wrbpmlqw7si1v2qdqa == \n\d\m\2\t\4\n\u\q\p\q\8\m\c\x\k\n\c\9\z\5\v\w\3\e\1\i\q\w\d\n\x\z\d\8\a\z\8\v\n\0\a\c\k\g\n\3\f\y\s\s\d\y\q\4\4\0\6\k\5\9\i\l\t\r\4\v\t\t\2\p\c\c\l\z\w\u\m\m\l\u\b\f\l\f\c\m\p\s\9\d\l\r\a\f\d\m\w\k\6\q\y\a\l\x\2\3\a\f\h\e\e\a\a\f\v\8\5\m\j\i\0\0\g\q\y\a\b\n\y\z\5\p\u\a\7\n\k\r\0\i\s\6\n\w\4\3\e\2\d\7\2\u\q\j\p\l\r\n\o\j\w\t\h\3\w\i\e\k\r\z\c\m\t\a\c\s\m\b\r\a\7\e\b\p\g\s\v\4\x\4\x\w\5\x\y\h\8\l\r\0\h\k\3\m\r\p\x\f\0\i\q\m\d\u\3\x\i\q\a\4\w\p\1\z\m\x\w\n\p\i\o\c\e\d\x\m\f\y\1\v\a\r\4\a\w\c\u\t\o\1\n\s\6\r\t\w\k\7\q\b\k\4\x\y\9\w\7\p\q\9\v\b\h\m\7\6\7\r\4\8\2\l\5\g\n\x\2\a\n\e\4\k\4\u\9\2\s\w\5\5\s\l\y\9\q\y\r\o\o\i\2\7\n\t\s\r\p\l\s\d\f\h\q\s\p\7\w\j\r\h\a\x\8\v\r\7\d\o\4\f\4\4\l\q\m\1\o\3\7\3\h\y\5\7\q\g\7\g\x\j\m\f\o\o\8\b\g\2\u\l\4\t\h\t\d\6\j\v\i\3\w\6\s\7\q\e\b\k\c\i\3\g\z\y\7\w\e\x\a\n\0\2\q\1\m\p\g\i\q\6\z\g\j\f\p\z\e\e\8\e\r\j\p\q\x\m\g\u\0\9\y\w\f\a\y\4\i\b\o\i\m\6\o\1\o\f\4\y\i\u\o\t\t\h\f\f\z\7\g\j\3\5\q\e\6\c\u\0\d\z\7\r\n\1\6\n\j\u\d\b\m\n\m\i\p\9\9\2\f\g\a\q\6\w\r\b\p\m\l\q\w\7\s\i\1\v\2\q\d\q\a ]] 00:09:10.213 21:08:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:10.213 21:08:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:10.213 [2024-07-14 21:08:21.743026] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:10.213 [2024-07-14 21:08:21.743224] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65146 ] 00:09:10.471 [2024-07-14 21:08:21.910105] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.729 [2024-07-14 21:08:22.138885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.987 [2024-07-14 21:08:22.303273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:11.924  Copying: 512/512 [B] (average 500 kBps) 00:09:11.924 00:09:11.925 21:08:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ndm2t4nuqpq8mcxknc9z5vw3e1iqwdnxzd8az8vn0ackgn3fyssdyq4406k59iltr4vtt2pcclzwummlubflfcmps9dlrafdmwk6qyalx23afheeaafv85mji00gqyabnyz5pua7nkr0is6nw43e2d72uqjplrnojwth3wiekrzcmtacsmbra7ebpgsv4x4xw5xyh8lr0hk3mrpxf0iqmdu3xiqa4wp1zmxwnpiocedxmfy1var4awcuto1ns6rtwk7qbk4xy9w7pq9vbhm767r482l5gnx2ane4k4u92sw55sly9qyrooi27ntsrplsdfhqsp7wjrhax8vr7do4f44lqm1o373hy57qg7gxjmfoo8bg2ul4thtd6jvi3w6s7qebkci3gzy7wexan02q1mpgiq6zgjfpzee8erjpqxmgu09ywfay4iboim6o1of4yiuotthffz7gj35qe6cu0dz7rn16njudbmnmip992fgaq6wrbpmlqw7si1v2qdqa == \n\d\m\2\t\4\n\u\q\p\q\8\m\c\x\k\n\c\9\z\5\v\w\3\e\1\i\q\w\d\n\x\z\d\8\a\z\8\v\n\0\a\c\k\g\n\3\f\y\s\s\d\y\q\4\4\0\6\k\5\9\i\l\t\r\4\v\t\t\2\p\c\c\l\z\w\u\m\m\l\u\b\f\l\f\c\m\p\s\9\d\l\r\a\f\d\m\w\k\6\q\y\a\l\x\2\3\a\f\h\e\e\a\a\f\v\8\5\m\j\i\0\0\g\q\y\a\b\n\y\z\5\p\u\a\7\n\k\r\0\i\s\6\n\w\4\3\e\2\d\7\2\u\q\j\p\l\r\n\o\j\w\t\h\3\w\i\e\k\r\z\c\m\t\a\c\s\m\b\r\a\7\e\b\p\g\s\v\4\x\4\x\w\5\x\y\h\8\l\r\0\h\k\3\m\r\p\x\f\0\i\q\m\d\u\3\x\i\q\a\4\w\p\1\z\m\x\w\n\p\i\o\c\e\d\x\m\f\y\1\v\a\r\4\a\w\c\u\t\o\1\n\s\6\r\t\w\k\7\q\b\k\4\x\y\9\w\7\p\q\9\v\b\h\m\7\6\7\r\4\8\2\l\5\g\n\x\2\a\n\e\4\k\4\u\9\2\s\w\5\5\s\l\y\9\q\y\r\o\o\i\2\7\n\t\s\r\p\l\s\d\f\h\q\s\p\7\w\j\r\h\a\x\8\v\r\7\d\o\4\f\4\4\l\q\m\1\o\3\7\3\h\y\5\7\q\g\7\g\x\j\m\f\o\o\8\b\g\2\u\l\4\t\h\t\d\6\j\v\i\3\w\6\s\7\q\e\b\k\c\i\3\g\z\y\7\w\e\x\a\n\0\2\q\1\m\p\g\i\q\6\z\g\j\f\p\z\e\e\8\e\r\j\p\q\x\m\g\u\0\9\y\w\f\a\y\4\i\b\o\i\m\6\o\1\o\f\4\y\i\u\o\t\t\h\f\f\z\7\g\j\3\5\q\e\6\c\u\0\d\z\7\r\n\1\6\n\j\u\d\b\m\n\m\i\p\9\9\2\f\g\a\q\6\w\r\b\p\m\l\q\w\7\s\i\1\v\2\q\d\q\a ]] 00:09:11.925 21:08:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:11.925 21:08:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:12.184 [2024-07-14 21:08:23.513454] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:12.184 [2024-07-14 21:08:23.513654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65174 ] 00:09:12.184 [2024-07-14 21:08:23.683949] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.443 [2024-07-14 21:08:23.914684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.702 [2024-07-14 21:08:24.093931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:13.637  Copying: 512/512 [B] (average 125 kBps) 00:09:13.637 00:09:13.637 21:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ndm2t4nuqpq8mcxknc9z5vw3e1iqwdnxzd8az8vn0ackgn3fyssdyq4406k59iltr4vtt2pcclzwummlubflfcmps9dlrafdmwk6qyalx23afheeaafv85mji00gqyabnyz5pua7nkr0is6nw43e2d72uqjplrnojwth3wiekrzcmtacsmbra7ebpgsv4x4xw5xyh8lr0hk3mrpxf0iqmdu3xiqa4wp1zmxwnpiocedxmfy1var4awcuto1ns6rtwk7qbk4xy9w7pq9vbhm767r482l5gnx2ane4k4u92sw55sly9qyrooi27ntsrplsdfhqsp7wjrhax8vr7do4f44lqm1o373hy57qg7gxjmfoo8bg2ul4thtd6jvi3w6s7qebkci3gzy7wexan02q1mpgiq6zgjfpzee8erjpqxmgu09ywfay4iboim6o1of4yiuotthffz7gj35qe6cu0dz7rn16njudbmnmip992fgaq6wrbpmlqw7si1v2qdqa == \n\d\m\2\t\4\n\u\q\p\q\8\m\c\x\k\n\c\9\z\5\v\w\3\e\1\i\q\w\d\n\x\z\d\8\a\z\8\v\n\0\a\c\k\g\n\3\f\y\s\s\d\y\q\4\4\0\6\k\5\9\i\l\t\r\4\v\t\t\2\p\c\c\l\z\w\u\m\m\l\u\b\f\l\f\c\m\p\s\9\d\l\r\a\f\d\m\w\k\6\q\y\a\l\x\2\3\a\f\h\e\e\a\a\f\v\8\5\m\j\i\0\0\g\q\y\a\b\n\y\z\5\p\u\a\7\n\k\r\0\i\s\6\n\w\4\3\e\2\d\7\2\u\q\j\p\l\r\n\o\j\w\t\h\3\w\i\e\k\r\z\c\m\t\a\c\s\m\b\r\a\7\e\b\p\g\s\v\4\x\4\x\w\5\x\y\h\8\l\r\0\h\k\3\m\r\p\x\f\0\i\q\m\d\u\3\x\i\q\a\4\w\p\1\z\m\x\w\n\p\i\o\c\e\d\x\m\f\y\1\v\a\r\4\a\w\c\u\t\o\1\n\s\6\r\t\w\k\7\q\b\k\4\x\y\9\w\7\p\q\9\v\b\h\m\7\6\7\r\4\8\2\l\5\g\n\x\2\a\n\e\4\k\4\u\9\2\s\w\5\5\s\l\y\9\q\y\r\o\o\i\2\7\n\t\s\r\p\l\s\d\f\h\q\s\p\7\w\j\r\h\a\x\8\v\r\7\d\o\4\f\4\4\l\q\m\1\o\3\7\3\h\y\5\7\q\g\7\g\x\j\m\f\o\o\8\b\g\2\u\l\4\t\h\t\d\6\j\v\i\3\w\6\s\7\q\e\b\k\c\i\3\g\z\y\7\w\e\x\a\n\0\2\q\1\m\p\g\i\q\6\z\g\j\f\p\z\e\e\8\e\r\j\p\q\x\m\g\u\0\9\y\w\f\a\y\4\i\b\o\i\m\6\o\1\o\f\4\y\i\u\o\t\t\h\f\f\z\7\g\j\3\5\q\e\6\c\u\0\d\z\7\r\n\1\6\n\j\u\d\b\m\n\m\i\p\9\9\2\f\g\a\q\6\w\r\b\p\m\l\q\w\7\s\i\1\v\2\q\d\q\a ]] 00:09:13.637 21:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:13.637 21:08:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:13.896 [2024-07-14 21:08:25.241292] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
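The run starting above is the final, dsync leg of the sweep (the sync leg just completed). For reference, the distinction between the two synchronous write flags, assuming the names map onto the open(2) flags as they do for GNU dd:
  dd if=dd.dump0 of=dd.dump1 oflag=dsync status=none   # O_DSYNC: each write waits for the data itself
  dd if=dd.dump0 of=dd.dump1 oflag=sync  status=none   # O_SYNC: data plus the file's metadata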
00:09:13.896 [2024-07-14 21:08:25.241428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65194 ] 00:09:13.896 [2024-07-14 21:08:25.397823] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.154 [2024-07-14 21:08:25.603282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.413 [2024-07-14 21:08:25.749927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:15.349  Copying: 512/512 [B] (average 166 kBps) 00:09:15.349 00:09:15.349 ************************************ 00:09:15.349 END TEST dd_flags_misc 00:09:15.349 ************************************ 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ndm2t4nuqpq8mcxknc9z5vw3e1iqwdnxzd8az8vn0ackgn3fyssdyq4406k59iltr4vtt2pcclzwummlubflfcmps9dlrafdmwk6qyalx23afheeaafv85mji00gqyabnyz5pua7nkr0is6nw43e2d72uqjplrnojwth3wiekrzcmtacsmbra7ebpgsv4x4xw5xyh8lr0hk3mrpxf0iqmdu3xiqa4wp1zmxwnpiocedxmfy1var4awcuto1ns6rtwk7qbk4xy9w7pq9vbhm767r482l5gnx2ane4k4u92sw55sly9qyrooi27ntsrplsdfhqsp7wjrhax8vr7do4f44lqm1o373hy57qg7gxjmfoo8bg2ul4thtd6jvi3w6s7qebkci3gzy7wexan02q1mpgiq6zgjfpzee8erjpqxmgu09ywfay4iboim6o1of4yiuotthffz7gj35qe6cu0dz7rn16njudbmnmip992fgaq6wrbpmlqw7si1v2qdqa == \n\d\m\2\t\4\n\u\q\p\q\8\m\c\x\k\n\c\9\z\5\v\w\3\e\1\i\q\w\d\n\x\z\d\8\a\z\8\v\n\0\a\c\k\g\n\3\f\y\s\s\d\y\q\4\4\0\6\k\5\9\i\l\t\r\4\v\t\t\2\p\c\c\l\z\w\u\m\m\l\u\b\f\l\f\c\m\p\s\9\d\l\r\a\f\d\m\w\k\6\q\y\a\l\x\2\3\a\f\h\e\e\a\a\f\v\8\5\m\j\i\0\0\g\q\y\a\b\n\y\z\5\p\u\a\7\n\k\r\0\i\s\6\n\w\4\3\e\2\d\7\2\u\q\j\p\l\r\n\o\j\w\t\h\3\w\i\e\k\r\z\c\m\t\a\c\s\m\b\r\a\7\e\b\p\g\s\v\4\x\4\x\w\5\x\y\h\8\l\r\0\h\k\3\m\r\p\x\f\0\i\q\m\d\u\3\x\i\q\a\4\w\p\1\z\m\x\w\n\p\i\o\c\e\d\x\m\f\y\1\v\a\r\4\a\w\c\u\t\o\1\n\s\6\r\t\w\k\7\q\b\k\4\x\y\9\w\7\p\q\9\v\b\h\m\7\6\7\r\4\8\2\l\5\g\n\x\2\a\n\e\4\k\4\u\9\2\s\w\5\5\s\l\y\9\q\y\r\o\o\i\2\7\n\t\s\r\p\l\s\d\f\h\q\s\p\7\w\j\r\h\a\x\8\v\r\7\d\o\4\f\4\4\l\q\m\1\o\3\7\3\h\y\5\7\q\g\7\g\x\j\m\f\o\o\8\b\g\2\u\l\4\t\h\t\d\6\j\v\i\3\w\6\s\7\q\e\b\k\c\i\3\g\z\y\7\w\e\x\a\n\0\2\q\1\m\p\g\i\q\6\z\g\j\f\p\z\e\e\8\e\r\j\p\q\x\m\g\u\0\9\y\w\f\a\y\4\i\b\o\i\m\6\o\1\o\f\4\y\i\u\o\t\t\h\f\f\z\7\g\j\3\5\q\e\6\c\u\0\d\z\7\r\n\1\6\n\j\u\d\b\m\n\m\i\p\9\9\2\f\g\a\q\6\w\r\b\p\m\l\q\w\7\s\i\1\v\2\q\d\q\a ]] 00:09:15.349 00:09:15.349 real 0m13.823s 00:09:15.349 user 0m11.411s 00:09:15.349 sys 0m6.483s 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:09:15.349 * Second test run, disabling liburing, forcing AIO 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:15.349 ************************************ 00:09:15.349 START TEST dd_flag_append_forced_aio 00:09:15.349 ************************************ 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=vppv0njx97im9y1q9jae9c5s01wxxreu 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=cwvbywsei5870phwi60qo4jsrfs2v6j9 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s vppv0njx97im9y1q9jae9c5s01wxxreu 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s cwvbywsei5870phwi60qo4jsrfs2v6j9 00:09:15.349 21:08:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:09:15.607 [2024-07-14 21:08:26.944716] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
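The run starting above is the first pass of the second round: "--aio" has been appended to the harness's DD_APP array, so liburing is left aside and spdk_dd drives the copy through its AIO path instead. In harness terms the invocation is roughly (DD_APP is assumed to start as just the spdk_dd path before "--aio" is added, matching the command line seen in the trace):
  DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio)
  "${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1 --oflag=append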
00:09:15.607 [2024-07-14 21:08:26.944943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65236 ] 00:09:15.607 [2024-07-14 21:08:27.112989] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.865 [2024-07-14 21:08:27.380264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.123 [2024-07-14 21:08:27.615921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:17.330  Copying: 32/32 [B] (average 31 kBps) 00:09:17.330 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ cwvbywsei5870phwi60qo4jsrfs2v6j9vppv0njx97im9y1q9jae9c5s01wxxreu == \c\w\v\b\y\w\s\e\i\5\8\7\0\p\h\w\i\6\0\q\o\4\j\s\r\f\s\2\v\6\j\9\v\p\p\v\0\n\j\x\9\7\i\m\9\y\1\q\9\j\a\e\9\c\5\s\0\1\w\x\x\r\e\u ]] 00:09:17.330 00:09:17.330 real 0m1.856s 00:09:17.330 user 0m1.532s 00:09:17.330 sys 0m0.196s 00:09:17.330 ************************************ 00:09:17.330 END TEST dd_flag_append_forced_aio 00:09:17.330 ************************************ 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:17.330 ************************************ 00:09:17.330 START TEST dd_flag_directory_forced_aio 00:09:17.330 ************************************ 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:17.330 21:08:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:17.330 [2024-07-14 21:08:28.867308] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:17.330 [2024-07-14 21:08:28.867473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65280 ] 00:09:17.606 [2024-07-14 21:08:29.038688] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.870 [2024-07-14 21:08:29.211456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.870 [2024-07-14 21:08:29.358009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:18.129 [2024-07-14 21:08:29.432681] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:18.129 [2024-07-14 21:08:29.432780] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:18.129 [2024-07-14 21:08:29.432825] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:18.697 [2024-07-14 21:08:30.006522] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:18.957 21:08:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:18.957 [2024-07-14 21:08:30.488664] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:18.957 [2024-07-14 21:08:30.488889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65296 ] 00:09:19.216 [2024-07-14 21:08:30.658320] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.474 [2024-07-14 21:08:30.808920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.474 [2024-07-14 21:08:30.980048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:19.733 [2024-07-14 21:08:31.066945] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:19.733 [2024-07-14 21:08:31.067001] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:19.733 [2024-07-14 21:08:31.067025] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:20.300 [2024-07-14 21:08:31.660606] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:09:20.560 
21:08:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:20.560 00:09:20.560 real 0m3.279s 00:09:20.560 user 0m2.655s 00:09:20.560 sys 0m0.396s 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.560 ************************************ 00:09:20.560 END TEST dd_flag_directory_forced_aio 00:09:20.560 ************************************ 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:20.560 ************************************ 00:09:20.560 START TEST dd_flag_nofollow_forced_aio 00:09:20.560 ************************************ 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:20.560 21:08:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:20.819 [2024-07-14 21:08:32.180238] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:20.819 [2024-07-14 21:08:32.180384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65342 ] 00:09:20.819 [2024-07-14 21:08:32.339999] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.078 [2024-07-14 21:08:32.523688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.337 [2024-07-14 21:08:32.699096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:21.337 [2024-07-14 21:08:32.792248] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:21.337 [2024-07-14 21:08:32.792312] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:21.337 [2024-07-14 21:08:32.792335] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:21.903 [2024-07-14 21:08:33.393767] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:22.469 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:09:22.469 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:22.469 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:09:22.469 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:09:22.469 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:09:22.469 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:22.469 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:22.469 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:09:22.469 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
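The directory-flag test that just finished and the nofollow test running here follow the same negative pattern: invoke spdk_dd with a flag that must fail for the given path and assert a non-zero exit. A condensed sketch of that pattern, with the same flags, paths, and expected error messages as in this run; the NOT/es bookkeeping from autotest_common.sh is reduced here to a plain exit-status check.

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

# --iflag=directory on a regular file must fail ("Not a directory")
if "$DD" --aio --if="$dump0" --iflag=directory --of="$dump0"; then
    echo 'unexpected success with --iflag=directory' >&2
fi

# --iflag=nofollow on a symlink must fail ("Too many levels of symbolic links")
ln -fs "$dump0" "$dump0.link"
if "$DD" --aio --if="$dump0.link" --iflag=nofollow --of="$dump1"; then
    echo 'unexpected success with --iflag=nofollow' >&2
fi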
00:09:22.469 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.469 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:22.469 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.469 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:22.469 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.469 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:22.470 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.470 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.470 21:08:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:22.470 [2024-07-14 21:08:33.870823] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:22.470 [2024-07-14 21:08:33.871016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65364 ] 00:09:22.729 [2024-07-14 21:08:34.041670] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.729 [2024-07-14 21:08:34.190221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.987 [2024-07-14 21:08:34.337847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:22.987 [2024-07-14 21:08:34.412535] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:22.987 [2024-07-14 21:08:34.412612] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:22.987 [2024-07-14 21:08:34.412655] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:23.555 [2024-07-14 21:08:34.984237] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:23.813 21:08:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:09:23.813 21:08:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:23.813 21:08:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:09:23.813 21:08:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:09:23.813 21:08:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:09:23.813 21:08:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:23.813 21:08:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:09:23.813 21:08:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:23.813 21:08:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:23.813 21:08:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:24.071 [2024-07-14 21:08:35.429582] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:24.071 [2024-07-14 21:08:35.429741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65383 ] 00:09:24.071 [2024-07-14 21:08:35.594473] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.329 [2024-07-14 21:08:35.751101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.588 [2024-07-14 21:08:35.898682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:25.523  Copying: 512/512 [B] (average 500 kBps) 00:09:25.523 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ jh1ihpgtdv1sg5bg1qjg9hw3krg4b587twdrzmtxrzg5fz5xa2x93rj5nf4bzut8uof0k4a18o6g4m8yqg4wdu4ue3vhb8k5bdisk8z0904nv9bolkcv4rfxt6ajuvfj04a8iqven08grbrqc3lvgvhqcxbx2zt2hyusxjbfmerd5vx95kzgr657qqzu8j76it9f70dl2ce2fmow9u4chasjhq1euigd8vc29212pnotm0ms7wrjirpt14gwvkot6vd1oe83sk5uce5fy6ubblariaj5dde36t7a7qcqb6s1i88h02mk1zvifx12k7gwbp7j9d7bb1i5veeugxk0lwmelknv9cmeab2a1a07o0aqd5zcdpg9vumzc53x8aro2hfhmnd8dioy05ww300d7vn7h7vpmnl96da9afqvlo2aowc38k1wgbmrxy1tytttt7q6um0h8iupyz517sxo8ye5tfeetz1i03srywbwwldje35zsdxww3mm7aafalo4 == \j\h\1\i\h\p\g\t\d\v\1\s\g\5\b\g\1\q\j\g\9\h\w\3\k\r\g\4\b\5\8\7\t\w\d\r\z\m\t\x\r\z\g\5\f\z\5\x\a\2\x\9\3\r\j\5\n\f\4\b\z\u\t\8\u\o\f\0\k\4\a\1\8\o\6\g\4\m\8\y\q\g\4\w\d\u\4\u\e\3\v\h\b\8\k\5\b\d\i\s\k\8\z\0\9\0\4\n\v\9\b\o\l\k\c\v\4\r\f\x\t\6\a\j\u\v\f\j\0\4\a\8\i\q\v\e\n\0\8\g\r\b\r\q\c\3\l\v\g\v\h\q\c\x\b\x\2\z\t\2\h\y\u\s\x\j\b\f\m\e\r\d\5\v\x\9\5\k\z\g\r\6\5\7\q\q\z\u\8\j\7\6\i\t\9\f\7\0\d\l\2\c\e\2\f\m\o\w\9\u\4\c\h\a\s\j\h\q\1\e\u\i\g\d\8\v\c\2\9\2\1\2\p\n\o\t\m\0\m\s\7\w\r\j\i\r\p\t\1\4\g\w\v\k\o\t\6\v\d\1\o\e\8\3\s\k\5\u\c\e\5\f\y\6\u\b\b\l\a\r\i\a\j\5\d\d\e\3\6\t\7\a\7\q\c\q\b\6\s\1\i\8\8\h\0\2\m\k\1\z\v\i\f\x\1\2\k\7\g\w\b\p\7\j\9\d\7\b\b\1\i\5\v\e\e\u\g\x\k\0\l\w\m\e\l\k\n\v\9\c\m\e\a\b\2\a\1\a\0\7\o\0\a\q\d\5\z\c\d\p\g\9\v\u\m\z\c\5\3\x\8\a\r\o\2\h\f\h\m\n\d\8\d\i\o\y\0\5\w\w\3\0\0\d\7\v\n\7\h\7\v\p\m\n\l\9\6\d\a\9\a\f\q\v\l\o\2\a\o\w\c\3\8\k\1\w\g\b\m\r\x\y\1\t\y\t\t\t\t\7\q\6\u\m\0\h\8\i\u\p\y\z\5\1\7\s\x\o\8\y\e\5\t\f\e\e\t\z\1\i\0\3\s\r\y\w\b\w\w\l\d\j\e\3\5\z\s\d\x\w\w\3\m\m\7\a\a\f\a\l\o\4 ]] 00:09:25.523 00:09:25.523 real 0m4.835s 00:09:25.523 user 0m3.931s 00:09:25.523 sys 0m0.551s 00:09:25.523 ************************************ 00:09:25.523 END TEST dd_flag_nofollow_forced_aio 00:09:25.523 ************************************ 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.523 21:08:36 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:25.523 ************************************ 00:09:25.523 START TEST dd_flag_noatime_forced_aio 00:09:25.523 ************************************ 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1720991315 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1720991316 00:09:25.523 21:08:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:09:26.458 21:08:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:26.716 [2024-07-14 21:08:38.100078] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
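A sketch of the noatime check starting here, based on the stat/sleep/spdk_dd sequence in the xtrace: the source file's access time is recorded, the file is read with --iflag=noatime after a one-second pause, and the access time is expected to be unchanged. This assumes the filesystem actually records atime updates (a noatime mount would mask the behaviour); paths are the ones used in this run. The second half of the test, visible further down, repeats the copy without the flag and expects the access time to move forward.

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

atime_before=$(stat --printf=%X "$dump0")
sleep 1
"$DD" --aio --if="$dump0" --iflag=noatime --of="$dump1"
atime_after=$(stat --printf=%X "$dump0")
(( atime_after == atime_before )) || echo 'noatime read still updated atime' >&2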
00:09:26.716 [2024-07-14 21:08:38.100236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65436 ] 00:09:26.974 [2024-07-14 21:08:38.277803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.974 [2024-07-14 21:08:38.477576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.233 [2024-07-14 21:08:38.624977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:28.168  Copying: 512/512 [B] (average 500 kBps) 00:09:28.168 00:09:28.168 21:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:28.168 21:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1720991315 )) 00:09:28.168 21:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:28.168 21:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1720991316 )) 00:09:28.168 21:08:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:28.427 [2024-07-14 21:08:39.777079] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:28.427 [2024-07-14 21:08:39.777238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65465 ] 00:09:28.427 [2024-07-14 21:08:39.947266] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.685 [2024-07-14 21:08:40.103591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.944 [2024-07-14 21:08:40.285255] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:29.880  Copying: 512/512 [B] (average 500 kBps) 00:09:29.880 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1720991320 )) 00:09:30.159 00:09:30.159 real 0m4.473s 00:09:30.159 user 0m2.817s 00:09:30.159 sys 0m0.403s 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:30.159 ************************************ 00:09:30.159 END TEST dd_flag_noatime_forced_aio 00:09:30.159 ************************************ 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.159 21:08:41 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:30.159 ************************************ 00:09:30.159 START TEST dd_flags_misc_forced_aio 00:09:30.159 ************************************ 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:30.159 21:08:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:30.159 [2024-07-14 21:08:41.611921] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:30.159 [2024-07-14 21:08:41.612102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65503 ] 00:09:30.416 [2024-07-14 21:08:41.780696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.416 [2024-07-14 21:08:41.946515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.675 [2024-07-14 21:08:42.113279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:31.609  Copying: 512/512 [B] (average 500 kBps) 00:09:31.609 00:09:31.610 21:08:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ etkj9hczmhljrkwfl83ewqy3iw0qdhia7ls6edg8y1d2814v6hazsignzrpxpib0x05stq44w0hv1g240l64xea7eyag3tqctglhtu1gx6iysi35erjcus48c3k4n8w8znj6fyiu8msd28t8his34d4tb6snngytjfig4osu8qksypwur83eh0qzbr0sediq9bh0jo7l9brnczqoiqochyjx3qwt0axb3gqk2oipmxoffz8mtavw2k8qviq9vuymhlwz265cwxlbum8tr1iufj84xeijsmdce6n9gs8ig16au8x6ntgx9wxux4uo12orzbf5zayn4ybimx0mah9iqyqrm09i1zvrsu9pk0g603tz1vdjh7fw11r7xo79i66ozapt8hupwf8z3qzukrs7gl6nizmarifjnbm36snd9sjdb2jfc7fc1zsa0ld7a3udads1qjtdizyf37ncha7jskwyv74h8lq3q2xd00qvls78ezjdi7jj4v6cciem4eqe == 
\e\t\k\j\9\h\c\z\m\h\l\j\r\k\w\f\l\8\3\e\w\q\y\3\i\w\0\q\d\h\i\a\7\l\s\6\e\d\g\8\y\1\d\2\8\1\4\v\6\h\a\z\s\i\g\n\z\r\p\x\p\i\b\0\x\0\5\s\t\q\4\4\w\0\h\v\1\g\2\4\0\l\6\4\x\e\a\7\e\y\a\g\3\t\q\c\t\g\l\h\t\u\1\g\x\6\i\y\s\i\3\5\e\r\j\c\u\s\4\8\c\3\k\4\n\8\w\8\z\n\j\6\f\y\i\u\8\m\s\d\2\8\t\8\h\i\s\3\4\d\4\t\b\6\s\n\n\g\y\t\j\f\i\g\4\o\s\u\8\q\k\s\y\p\w\u\r\8\3\e\h\0\q\z\b\r\0\s\e\d\i\q\9\b\h\0\j\o\7\l\9\b\r\n\c\z\q\o\i\q\o\c\h\y\j\x\3\q\w\t\0\a\x\b\3\g\q\k\2\o\i\p\m\x\o\f\f\z\8\m\t\a\v\w\2\k\8\q\v\i\q\9\v\u\y\m\h\l\w\z\2\6\5\c\w\x\l\b\u\m\8\t\r\1\i\u\f\j\8\4\x\e\i\j\s\m\d\c\e\6\n\9\g\s\8\i\g\1\6\a\u\8\x\6\n\t\g\x\9\w\x\u\x\4\u\o\1\2\o\r\z\b\f\5\z\a\y\n\4\y\b\i\m\x\0\m\a\h\9\i\q\y\q\r\m\0\9\i\1\z\v\r\s\u\9\p\k\0\g\6\0\3\t\z\1\v\d\j\h\7\f\w\1\1\r\7\x\o\7\9\i\6\6\o\z\a\p\t\8\h\u\p\w\f\8\z\3\q\z\u\k\r\s\7\g\l\6\n\i\z\m\a\r\i\f\j\n\b\m\3\6\s\n\d\9\s\j\d\b\2\j\f\c\7\f\c\1\z\s\a\0\l\d\7\a\3\u\d\a\d\s\1\q\j\t\d\i\z\y\f\3\7\n\c\h\a\7\j\s\k\w\y\v\7\4\h\8\l\q\3\q\2\x\d\0\0\q\v\l\s\7\8\e\z\j\d\i\7\j\j\4\v\6\c\c\i\e\m\4\e\q\e ]] 00:09:31.610 21:08:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:31.610 21:08:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:31.867 [2024-07-14 21:08:43.228428] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:31.867 [2024-07-14 21:08:43.228612] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65523 ] 00:09:31.867 [2024-07-14 21:08:43.397169] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.124 [2024-07-14 21:08:43.553886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.382 [2024-07-14 21:08:43.704788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:33.316  Copying: 512/512 [B] (average 500 kBps) 00:09:33.316 00:09:33.316 21:08:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ etkj9hczmhljrkwfl83ewqy3iw0qdhia7ls6edg8y1d2814v6hazsignzrpxpib0x05stq44w0hv1g240l64xea7eyag3tqctglhtu1gx6iysi35erjcus48c3k4n8w8znj6fyiu8msd28t8his34d4tb6snngytjfig4osu8qksypwur83eh0qzbr0sediq9bh0jo7l9brnczqoiqochyjx3qwt0axb3gqk2oipmxoffz8mtavw2k8qviq9vuymhlwz265cwxlbum8tr1iufj84xeijsmdce6n9gs8ig16au8x6ntgx9wxux4uo12orzbf5zayn4ybimx0mah9iqyqrm09i1zvrsu9pk0g603tz1vdjh7fw11r7xo79i66ozapt8hupwf8z3qzukrs7gl6nizmarifjnbm36snd9sjdb2jfc7fc1zsa0ld7a3udads1qjtdizyf37ncha7jskwyv74h8lq3q2xd00qvls78ezjdi7jj4v6cciem4eqe == 
\e\t\k\j\9\h\c\z\m\h\l\j\r\k\w\f\l\8\3\e\w\q\y\3\i\w\0\q\d\h\i\a\7\l\s\6\e\d\g\8\y\1\d\2\8\1\4\v\6\h\a\z\s\i\g\n\z\r\p\x\p\i\b\0\x\0\5\s\t\q\4\4\w\0\h\v\1\g\2\4\0\l\6\4\x\e\a\7\e\y\a\g\3\t\q\c\t\g\l\h\t\u\1\g\x\6\i\y\s\i\3\5\e\r\j\c\u\s\4\8\c\3\k\4\n\8\w\8\z\n\j\6\f\y\i\u\8\m\s\d\2\8\t\8\h\i\s\3\4\d\4\t\b\6\s\n\n\g\y\t\j\f\i\g\4\o\s\u\8\q\k\s\y\p\w\u\r\8\3\e\h\0\q\z\b\r\0\s\e\d\i\q\9\b\h\0\j\o\7\l\9\b\r\n\c\z\q\o\i\q\o\c\h\y\j\x\3\q\w\t\0\a\x\b\3\g\q\k\2\o\i\p\m\x\o\f\f\z\8\m\t\a\v\w\2\k\8\q\v\i\q\9\v\u\y\m\h\l\w\z\2\6\5\c\w\x\l\b\u\m\8\t\r\1\i\u\f\j\8\4\x\e\i\j\s\m\d\c\e\6\n\9\g\s\8\i\g\1\6\a\u\8\x\6\n\t\g\x\9\w\x\u\x\4\u\o\1\2\o\r\z\b\f\5\z\a\y\n\4\y\b\i\m\x\0\m\a\h\9\i\q\y\q\r\m\0\9\i\1\z\v\r\s\u\9\p\k\0\g\6\0\3\t\z\1\v\d\j\h\7\f\w\1\1\r\7\x\o\7\9\i\6\6\o\z\a\p\t\8\h\u\p\w\f\8\z\3\q\z\u\k\r\s\7\g\l\6\n\i\z\m\a\r\i\f\j\n\b\m\3\6\s\n\d\9\s\j\d\b\2\j\f\c\7\f\c\1\z\s\a\0\l\d\7\a\3\u\d\a\d\s\1\q\j\t\d\i\z\y\f\3\7\n\c\h\a\7\j\s\k\w\y\v\7\4\h\8\l\q\3\q\2\x\d\0\0\q\v\l\s\7\8\e\z\j\d\i\7\j\j\4\v\6\c\c\i\e\m\4\e\q\e ]] 00:09:33.316 21:08:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:33.316 21:08:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:33.316 [2024-07-14 21:08:44.843356] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:33.316 [2024-07-14 21:08:44.843538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65542 ] 00:09:33.574 [2024-07-14 21:08:45.014300] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.832 [2024-07-14 21:08:45.165610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.832 [2024-07-14 21:08:45.323923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:35.042  Copying: 512/512 [B] (average 166 kBps) 00:09:35.042 00:09:35.042 21:08:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ etkj9hczmhljrkwfl83ewqy3iw0qdhia7ls6edg8y1d2814v6hazsignzrpxpib0x05stq44w0hv1g240l64xea7eyag3tqctglhtu1gx6iysi35erjcus48c3k4n8w8znj6fyiu8msd28t8his34d4tb6snngytjfig4osu8qksypwur83eh0qzbr0sediq9bh0jo7l9brnczqoiqochyjx3qwt0axb3gqk2oipmxoffz8mtavw2k8qviq9vuymhlwz265cwxlbum8tr1iufj84xeijsmdce6n9gs8ig16au8x6ntgx9wxux4uo12orzbf5zayn4ybimx0mah9iqyqrm09i1zvrsu9pk0g603tz1vdjh7fw11r7xo79i66ozapt8hupwf8z3qzukrs7gl6nizmarifjnbm36snd9sjdb2jfc7fc1zsa0ld7a3udads1qjtdizyf37ncha7jskwyv74h8lq3q2xd00qvls78ezjdi7jj4v6cciem4eqe == 
\e\t\k\j\9\h\c\z\m\h\l\j\r\k\w\f\l\8\3\e\w\q\y\3\i\w\0\q\d\h\i\a\7\l\s\6\e\d\g\8\y\1\d\2\8\1\4\v\6\h\a\z\s\i\g\n\z\r\p\x\p\i\b\0\x\0\5\s\t\q\4\4\w\0\h\v\1\g\2\4\0\l\6\4\x\e\a\7\e\y\a\g\3\t\q\c\t\g\l\h\t\u\1\g\x\6\i\y\s\i\3\5\e\r\j\c\u\s\4\8\c\3\k\4\n\8\w\8\z\n\j\6\f\y\i\u\8\m\s\d\2\8\t\8\h\i\s\3\4\d\4\t\b\6\s\n\n\g\y\t\j\f\i\g\4\o\s\u\8\q\k\s\y\p\w\u\r\8\3\e\h\0\q\z\b\r\0\s\e\d\i\q\9\b\h\0\j\o\7\l\9\b\r\n\c\z\q\o\i\q\o\c\h\y\j\x\3\q\w\t\0\a\x\b\3\g\q\k\2\o\i\p\m\x\o\f\f\z\8\m\t\a\v\w\2\k\8\q\v\i\q\9\v\u\y\m\h\l\w\z\2\6\5\c\w\x\l\b\u\m\8\t\r\1\i\u\f\j\8\4\x\e\i\j\s\m\d\c\e\6\n\9\g\s\8\i\g\1\6\a\u\8\x\6\n\t\g\x\9\w\x\u\x\4\u\o\1\2\o\r\z\b\f\5\z\a\y\n\4\y\b\i\m\x\0\m\a\h\9\i\q\y\q\r\m\0\9\i\1\z\v\r\s\u\9\p\k\0\g\6\0\3\t\z\1\v\d\j\h\7\f\w\1\1\r\7\x\o\7\9\i\6\6\o\z\a\p\t\8\h\u\p\w\f\8\z\3\q\z\u\k\r\s\7\g\l\6\n\i\z\m\a\r\i\f\j\n\b\m\3\6\s\n\d\9\s\j\d\b\2\j\f\c\7\f\c\1\z\s\a\0\l\d\7\a\3\u\d\a\d\s\1\q\j\t\d\i\z\y\f\3\7\n\c\h\a\7\j\s\k\w\y\v\7\4\h\8\l\q\3\q\2\x\d\0\0\q\v\l\s\7\8\e\z\j\d\i\7\j\j\4\v\6\c\c\i\e\m\4\e\q\e ]] 00:09:35.042 21:08:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:35.042 21:08:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:35.042 [2024-07-14 21:08:46.505114] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:35.042 [2024-07-14 21:08:46.505289] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65562 ] 00:09:35.300 [2024-07-14 21:08:46.675486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.559 [2024-07-14 21:08:46.853721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.559 [2024-07-14 21:08:47.018190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:36.936  Copying: 512/512 [B] (average 166 kBps) 00:09:36.936 00:09:36.936 21:08:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ etkj9hczmhljrkwfl83ewqy3iw0qdhia7ls6edg8y1d2814v6hazsignzrpxpib0x05stq44w0hv1g240l64xea7eyag3tqctglhtu1gx6iysi35erjcus48c3k4n8w8znj6fyiu8msd28t8his34d4tb6snngytjfig4osu8qksypwur83eh0qzbr0sediq9bh0jo7l9brnczqoiqochyjx3qwt0axb3gqk2oipmxoffz8mtavw2k8qviq9vuymhlwz265cwxlbum8tr1iufj84xeijsmdce6n9gs8ig16au8x6ntgx9wxux4uo12orzbf5zayn4ybimx0mah9iqyqrm09i1zvrsu9pk0g603tz1vdjh7fw11r7xo79i66ozapt8hupwf8z3qzukrs7gl6nizmarifjnbm36snd9sjdb2jfc7fc1zsa0ld7a3udads1qjtdizyf37ncha7jskwyv74h8lq3q2xd00qvls78ezjdi7jj4v6cciem4eqe == 
\e\t\k\j\9\h\c\z\m\h\l\j\r\k\w\f\l\8\3\e\w\q\y\3\i\w\0\q\d\h\i\a\7\l\s\6\e\d\g\8\y\1\d\2\8\1\4\v\6\h\a\z\s\i\g\n\z\r\p\x\p\i\b\0\x\0\5\s\t\q\4\4\w\0\h\v\1\g\2\4\0\l\6\4\x\e\a\7\e\y\a\g\3\t\q\c\t\g\l\h\t\u\1\g\x\6\i\y\s\i\3\5\e\r\j\c\u\s\4\8\c\3\k\4\n\8\w\8\z\n\j\6\f\y\i\u\8\m\s\d\2\8\t\8\h\i\s\3\4\d\4\t\b\6\s\n\n\g\y\t\j\f\i\g\4\o\s\u\8\q\k\s\y\p\w\u\r\8\3\e\h\0\q\z\b\r\0\s\e\d\i\q\9\b\h\0\j\o\7\l\9\b\r\n\c\z\q\o\i\q\o\c\h\y\j\x\3\q\w\t\0\a\x\b\3\g\q\k\2\o\i\p\m\x\o\f\f\z\8\m\t\a\v\w\2\k\8\q\v\i\q\9\v\u\y\m\h\l\w\z\2\6\5\c\w\x\l\b\u\m\8\t\r\1\i\u\f\j\8\4\x\e\i\j\s\m\d\c\e\6\n\9\g\s\8\i\g\1\6\a\u\8\x\6\n\t\g\x\9\w\x\u\x\4\u\o\1\2\o\r\z\b\f\5\z\a\y\n\4\y\b\i\m\x\0\m\a\h\9\i\q\y\q\r\m\0\9\i\1\z\v\r\s\u\9\p\k\0\g\6\0\3\t\z\1\v\d\j\h\7\f\w\1\1\r\7\x\o\7\9\i\6\6\o\z\a\p\t\8\h\u\p\w\f\8\z\3\q\z\u\k\r\s\7\g\l\6\n\i\z\m\a\r\i\f\j\n\b\m\3\6\s\n\d\9\s\j\d\b\2\j\f\c\7\f\c\1\z\s\a\0\l\d\7\a\3\u\d\a\d\s\1\q\j\t\d\i\z\y\f\3\7\n\c\h\a\7\j\s\k\w\y\v\7\4\h\8\l\q\3\q\2\x\d\0\0\q\v\l\s\7\8\e\z\j\d\i\7\j\j\4\v\6\c\c\i\e\m\4\e\q\e ]] 00:09:36.936 21:08:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:36.936 21:08:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:09:36.936 21:08:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:36.936 21:08:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:36.936 21:08:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:36.936 21:08:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:36.936 [2024-07-14 21:08:48.260542] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
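The dd_flags_misc_forced_aio copies around this point are generated by a nested loop over the arrays declared at the start of the test: flags_ro=(direct nonblock) on the read side, and the same list plus sync and dsync on the write side, with a fresh 512-byte payload per read flag. A condensed sketch of that loop follows; /dev/urandom stands in for gen_bytes 512 and cmp replaces the long string comparison in the log, so the helpers here are substitutions, not the test's own.

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)

for flag_ro in "${flags_ro[@]}"; do
    head -c 512 /dev/urandom > "$dump0"   # stand-in for gen_bytes 512
    for flag_rw in "${flags_rw[@]}"; do
        "$DD" --aio --if="$dump0" --iflag="$flag_ro" --of="$dump1" --oflag="$flag_rw"
        cmp -s "$dump0" "$dump1" || echo "mismatch for $flag_ro/$flag_rw" >&2
    done
done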
00:09:36.936 [2024-07-14 21:08:48.260676] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65587 ] 00:09:36.936 [2024-07-14 21:08:48.414326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.195 [2024-07-14 21:08:48.588724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.195 [2024-07-14 21:08:48.741098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:38.391  Copying: 512/512 [B] (average 500 kBps) 00:09:38.391 00:09:38.391 21:08:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gbg3rf1onpj00n2u5g144humnx2dhjr2s2x4orepgre1sel3dsbl5zaal6o1mnchpv0k6n95751jm3xeqoj9h68z3bthhsq4jz6aylimrcurfgflx0jvrod1nl31z18ok8rq4vnx620vxzxo1mkk0gymj6rhg2aa8stumzc1p5fifijz0ck8mza4nimban90jvpllqbdhd7jrwil77butgu6dozuwcsk2p0ackuvghr1bhn57p3ufztltyjcj9g1yrl4qi3jmpnyzqgfa8260pxibdcvgteqpr1rdld3s9vvxhkibe4gpp9e7r07p1ygxp7x8ktiylhkhvvsdt0h41p4dsaicyn8gep7fie2tmj4lxehbg698up3p1vb9bcu39c51mo27ik8uqmlmtdcgfmdcg1202eld7y24ztgz7m169i2iotyt71lffo38q8gbsbe3uup6610ua557r65vsa16aw9mpgoqn1af334ioerim39h9jaee9d8662c2am == \g\b\g\3\r\f\1\o\n\p\j\0\0\n\2\u\5\g\1\4\4\h\u\m\n\x\2\d\h\j\r\2\s\2\x\4\o\r\e\p\g\r\e\1\s\e\l\3\d\s\b\l\5\z\a\a\l\6\o\1\m\n\c\h\p\v\0\k\6\n\9\5\7\5\1\j\m\3\x\e\q\o\j\9\h\6\8\z\3\b\t\h\h\s\q\4\j\z\6\a\y\l\i\m\r\c\u\r\f\g\f\l\x\0\j\v\r\o\d\1\n\l\3\1\z\1\8\o\k\8\r\q\4\v\n\x\6\2\0\v\x\z\x\o\1\m\k\k\0\g\y\m\j\6\r\h\g\2\a\a\8\s\t\u\m\z\c\1\p\5\f\i\f\i\j\z\0\c\k\8\m\z\a\4\n\i\m\b\a\n\9\0\j\v\p\l\l\q\b\d\h\d\7\j\r\w\i\l\7\7\b\u\t\g\u\6\d\o\z\u\w\c\s\k\2\p\0\a\c\k\u\v\g\h\r\1\b\h\n\5\7\p\3\u\f\z\t\l\t\y\j\c\j\9\g\1\y\r\l\4\q\i\3\j\m\p\n\y\z\q\g\f\a\8\2\6\0\p\x\i\b\d\c\v\g\t\e\q\p\r\1\r\d\l\d\3\s\9\v\v\x\h\k\i\b\e\4\g\p\p\9\e\7\r\0\7\p\1\y\g\x\p\7\x\8\k\t\i\y\l\h\k\h\v\v\s\d\t\0\h\4\1\p\4\d\s\a\i\c\y\n\8\g\e\p\7\f\i\e\2\t\m\j\4\l\x\e\h\b\g\6\9\8\u\p\3\p\1\v\b\9\b\c\u\3\9\c\5\1\m\o\2\7\i\k\8\u\q\m\l\m\t\d\c\g\f\m\d\c\g\1\2\0\2\e\l\d\7\y\2\4\z\t\g\z\7\m\1\6\9\i\2\i\o\t\y\t\7\1\l\f\f\o\3\8\q\8\g\b\s\b\e\3\u\u\p\6\6\1\0\u\a\5\5\7\r\6\5\v\s\a\1\6\a\w\9\m\p\g\o\q\n\1\a\f\3\3\4\i\o\e\r\i\m\3\9\h\9\j\a\e\e\9\d\8\6\6\2\c\2\a\m ]] 00:09:38.391 21:08:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:38.391 21:08:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:38.391 [2024-07-14 21:08:49.856311] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:38.391 [2024-07-14 21:08:49.856444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65601 ] 00:09:38.650 [2024-07-14 21:08:50.008726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.650 [2024-07-14 21:08:50.185728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.909 [2024-07-14 21:08:50.348313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:40.287  Copying: 512/512 [B] (average 500 kBps) 00:09:40.287 00:09:40.287 21:08:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gbg3rf1onpj00n2u5g144humnx2dhjr2s2x4orepgre1sel3dsbl5zaal6o1mnchpv0k6n95751jm3xeqoj9h68z3bthhsq4jz6aylimrcurfgflx0jvrod1nl31z18ok8rq4vnx620vxzxo1mkk0gymj6rhg2aa8stumzc1p5fifijz0ck8mza4nimban90jvpllqbdhd7jrwil77butgu6dozuwcsk2p0ackuvghr1bhn57p3ufztltyjcj9g1yrl4qi3jmpnyzqgfa8260pxibdcvgteqpr1rdld3s9vvxhkibe4gpp9e7r07p1ygxp7x8ktiylhkhvvsdt0h41p4dsaicyn8gep7fie2tmj4lxehbg698up3p1vb9bcu39c51mo27ik8uqmlmtdcgfmdcg1202eld7y24ztgz7m169i2iotyt71lffo38q8gbsbe3uup6610ua557r65vsa16aw9mpgoqn1af334ioerim39h9jaee9d8662c2am == \g\b\g\3\r\f\1\o\n\p\j\0\0\n\2\u\5\g\1\4\4\h\u\m\n\x\2\d\h\j\r\2\s\2\x\4\o\r\e\p\g\r\e\1\s\e\l\3\d\s\b\l\5\z\a\a\l\6\o\1\m\n\c\h\p\v\0\k\6\n\9\5\7\5\1\j\m\3\x\e\q\o\j\9\h\6\8\z\3\b\t\h\h\s\q\4\j\z\6\a\y\l\i\m\r\c\u\r\f\g\f\l\x\0\j\v\r\o\d\1\n\l\3\1\z\1\8\o\k\8\r\q\4\v\n\x\6\2\0\v\x\z\x\o\1\m\k\k\0\g\y\m\j\6\r\h\g\2\a\a\8\s\t\u\m\z\c\1\p\5\f\i\f\i\j\z\0\c\k\8\m\z\a\4\n\i\m\b\a\n\9\0\j\v\p\l\l\q\b\d\h\d\7\j\r\w\i\l\7\7\b\u\t\g\u\6\d\o\z\u\w\c\s\k\2\p\0\a\c\k\u\v\g\h\r\1\b\h\n\5\7\p\3\u\f\z\t\l\t\y\j\c\j\9\g\1\y\r\l\4\q\i\3\j\m\p\n\y\z\q\g\f\a\8\2\6\0\p\x\i\b\d\c\v\g\t\e\q\p\r\1\r\d\l\d\3\s\9\v\v\x\h\k\i\b\e\4\g\p\p\9\e\7\r\0\7\p\1\y\g\x\p\7\x\8\k\t\i\y\l\h\k\h\v\v\s\d\t\0\h\4\1\p\4\d\s\a\i\c\y\n\8\g\e\p\7\f\i\e\2\t\m\j\4\l\x\e\h\b\g\6\9\8\u\p\3\p\1\v\b\9\b\c\u\3\9\c\5\1\m\o\2\7\i\k\8\u\q\m\l\m\t\d\c\g\f\m\d\c\g\1\2\0\2\e\l\d\7\y\2\4\z\t\g\z\7\m\1\6\9\i\2\i\o\t\y\t\7\1\l\f\f\o\3\8\q\8\g\b\s\b\e\3\u\u\p\6\6\1\0\u\a\5\5\7\r\6\5\v\s\a\1\6\a\w\9\m\p\g\o\q\n\1\a\f\3\3\4\i\o\e\r\i\m\3\9\h\9\j\a\e\e\9\d\8\6\6\2\c\2\a\m ]] 00:09:40.287 21:08:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:40.287 21:08:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:40.287 [2024-07-14 21:08:51.541551] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:40.287 [2024-07-14 21:08:51.541717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65626 ] 00:09:40.287 [2024-07-14 21:08:51.706496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.547 [2024-07-14 21:08:51.874135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.547 [2024-07-14 21:08:52.031773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:41.744  Copying: 512/512 [B] (average 250 kBps) 00:09:41.744 00:09:41.744 21:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gbg3rf1onpj00n2u5g144humnx2dhjr2s2x4orepgre1sel3dsbl5zaal6o1mnchpv0k6n95751jm3xeqoj9h68z3bthhsq4jz6aylimrcurfgflx0jvrod1nl31z18ok8rq4vnx620vxzxo1mkk0gymj6rhg2aa8stumzc1p5fifijz0ck8mza4nimban90jvpllqbdhd7jrwil77butgu6dozuwcsk2p0ackuvghr1bhn57p3ufztltyjcj9g1yrl4qi3jmpnyzqgfa8260pxibdcvgteqpr1rdld3s9vvxhkibe4gpp9e7r07p1ygxp7x8ktiylhkhvvsdt0h41p4dsaicyn8gep7fie2tmj4lxehbg698up3p1vb9bcu39c51mo27ik8uqmlmtdcgfmdcg1202eld7y24ztgz7m169i2iotyt71lffo38q8gbsbe3uup6610ua557r65vsa16aw9mpgoqn1af334ioerim39h9jaee9d8662c2am == \g\b\g\3\r\f\1\o\n\p\j\0\0\n\2\u\5\g\1\4\4\h\u\m\n\x\2\d\h\j\r\2\s\2\x\4\o\r\e\p\g\r\e\1\s\e\l\3\d\s\b\l\5\z\a\a\l\6\o\1\m\n\c\h\p\v\0\k\6\n\9\5\7\5\1\j\m\3\x\e\q\o\j\9\h\6\8\z\3\b\t\h\h\s\q\4\j\z\6\a\y\l\i\m\r\c\u\r\f\g\f\l\x\0\j\v\r\o\d\1\n\l\3\1\z\1\8\o\k\8\r\q\4\v\n\x\6\2\0\v\x\z\x\o\1\m\k\k\0\g\y\m\j\6\r\h\g\2\a\a\8\s\t\u\m\z\c\1\p\5\f\i\f\i\j\z\0\c\k\8\m\z\a\4\n\i\m\b\a\n\9\0\j\v\p\l\l\q\b\d\h\d\7\j\r\w\i\l\7\7\b\u\t\g\u\6\d\o\z\u\w\c\s\k\2\p\0\a\c\k\u\v\g\h\r\1\b\h\n\5\7\p\3\u\f\z\t\l\t\y\j\c\j\9\g\1\y\r\l\4\q\i\3\j\m\p\n\y\z\q\g\f\a\8\2\6\0\p\x\i\b\d\c\v\g\t\e\q\p\r\1\r\d\l\d\3\s\9\v\v\x\h\k\i\b\e\4\g\p\p\9\e\7\r\0\7\p\1\y\g\x\p\7\x\8\k\t\i\y\l\h\k\h\v\v\s\d\t\0\h\4\1\p\4\d\s\a\i\c\y\n\8\g\e\p\7\f\i\e\2\t\m\j\4\l\x\e\h\b\g\6\9\8\u\p\3\p\1\v\b\9\b\c\u\3\9\c\5\1\m\o\2\7\i\k\8\u\q\m\l\m\t\d\c\g\f\m\d\c\g\1\2\0\2\e\l\d\7\y\2\4\z\t\g\z\7\m\1\6\9\i\2\i\o\t\y\t\7\1\l\f\f\o\3\8\q\8\g\b\s\b\e\3\u\u\p\6\6\1\0\u\a\5\5\7\r\6\5\v\s\a\1\6\a\w\9\m\p\g\o\q\n\1\a\f\3\3\4\i\o\e\r\i\m\3\9\h\9\j\a\e\e\9\d\8\6\6\2\c\2\a\m ]] 00:09:41.744 21:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:41.745 21:08:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:41.745 [2024-07-14 21:08:53.199488] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:41.745 [2024-07-14 21:08:53.199620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65646 ] 00:09:42.004 [2024-07-14 21:08:53.358382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.004 [2024-07-14 21:08:53.527910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.262 [2024-07-14 21:08:53.679411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:43.639  Copying: 512/512 [B] (average 250 kBps) 00:09:43.639 00:09:43.639 ************************************ 00:09:43.639 END TEST dd_flags_misc_forced_aio 00:09:43.639 ************************************ 00:09:43.639 21:08:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gbg3rf1onpj00n2u5g144humnx2dhjr2s2x4orepgre1sel3dsbl5zaal6o1mnchpv0k6n95751jm3xeqoj9h68z3bthhsq4jz6aylimrcurfgflx0jvrod1nl31z18ok8rq4vnx620vxzxo1mkk0gymj6rhg2aa8stumzc1p5fifijz0ck8mza4nimban90jvpllqbdhd7jrwil77butgu6dozuwcsk2p0ackuvghr1bhn57p3ufztltyjcj9g1yrl4qi3jmpnyzqgfa8260pxibdcvgteqpr1rdld3s9vvxhkibe4gpp9e7r07p1ygxp7x8ktiylhkhvvsdt0h41p4dsaicyn8gep7fie2tmj4lxehbg698up3p1vb9bcu39c51mo27ik8uqmlmtdcgfmdcg1202eld7y24ztgz7m169i2iotyt71lffo38q8gbsbe3uup6610ua557r65vsa16aw9mpgoqn1af334ioerim39h9jaee9d8662c2am == \g\b\g\3\r\f\1\o\n\p\j\0\0\n\2\u\5\g\1\4\4\h\u\m\n\x\2\d\h\j\r\2\s\2\x\4\o\r\e\p\g\r\e\1\s\e\l\3\d\s\b\l\5\z\a\a\l\6\o\1\m\n\c\h\p\v\0\k\6\n\9\5\7\5\1\j\m\3\x\e\q\o\j\9\h\6\8\z\3\b\t\h\h\s\q\4\j\z\6\a\y\l\i\m\r\c\u\r\f\g\f\l\x\0\j\v\r\o\d\1\n\l\3\1\z\1\8\o\k\8\r\q\4\v\n\x\6\2\0\v\x\z\x\o\1\m\k\k\0\g\y\m\j\6\r\h\g\2\a\a\8\s\t\u\m\z\c\1\p\5\f\i\f\i\j\z\0\c\k\8\m\z\a\4\n\i\m\b\a\n\9\0\j\v\p\l\l\q\b\d\h\d\7\j\r\w\i\l\7\7\b\u\t\g\u\6\d\o\z\u\w\c\s\k\2\p\0\a\c\k\u\v\g\h\r\1\b\h\n\5\7\p\3\u\f\z\t\l\t\y\j\c\j\9\g\1\y\r\l\4\q\i\3\j\m\p\n\y\z\q\g\f\a\8\2\6\0\p\x\i\b\d\c\v\g\t\e\q\p\r\1\r\d\l\d\3\s\9\v\v\x\h\k\i\b\e\4\g\p\p\9\e\7\r\0\7\p\1\y\g\x\p\7\x\8\k\t\i\y\l\h\k\h\v\v\s\d\t\0\h\4\1\p\4\d\s\a\i\c\y\n\8\g\e\p\7\f\i\e\2\t\m\j\4\l\x\e\h\b\g\6\9\8\u\p\3\p\1\v\b\9\b\c\u\3\9\c\5\1\m\o\2\7\i\k\8\u\q\m\l\m\t\d\c\g\f\m\d\c\g\1\2\0\2\e\l\d\7\y\2\4\z\t\g\z\7\m\1\6\9\i\2\i\o\t\y\t\7\1\l\f\f\o\3\8\q\8\g\b\s\b\e\3\u\u\p\6\6\1\0\u\a\5\5\7\r\6\5\v\s\a\1\6\a\w\9\m\p\g\o\q\n\1\a\f\3\3\4\i\o\e\r\i\m\3\9\h\9\j\a\e\e\9\d\8\6\6\2\c\2\a\m ]] 00:09:43.639 00:09:43.639 real 0m13.271s 00:09:43.639 user 0m10.803s 00:09:43.639 sys 0m1.464s 00:09:43.639 21:08:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.639 21:08:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:43.639 21:08:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:43.639 21:08:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:09:43.639 21:08:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:43.639 21:08:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:43.639 ************************************ 00:09:43.639 END TEST spdk_dd_posix 00:09:43.639 ************************************ 00:09:43.639 00:09:43.639 real 0m56.986s 00:09:43.639 user 0m44.720s 
00:09:43.639 sys 0m13.953s 00:09:43.639 21:08:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.639 21:08:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:43.639 21:08:54 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:09:43.639 21:08:54 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:43.639 21:08:54 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:43.639 21:08:54 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.639 21:08:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:43.639 ************************************ 00:09:43.639 START TEST spdk_dd_malloc 00:09:43.639 ************************************ 00:09:43.639 21:08:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:43.639 * Looking for test storage... 00:09:43.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:43.639 21:08:54 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.639 21:08:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.639 21:08:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.639 21:08:54 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.639 21:08:54 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.639 21:08:54 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:43.640 ************************************ 00:09:43.640 START TEST dd_malloc_copy 00:09:43.640 ************************************ 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:43.640 21:08:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:43.640 { 00:09:43.640 "subsystems": [ 00:09:43.640 { 00:09:43.640 "subsystem": "bdev", 00:09:43.640 "config": [ 00:09:43.640 { 00:09:43.640 "params": { 00:09:43.640 "block_size": 512, 00:09:43.640 "num_blocks": 1048576, 00:09:43.640 "name": "malloc0" 00:09:43.640 }, 00:09:43.640 "method": "bdev_malloc_create" 00:09:43.640 }, 00:09:43.640 { 00:09:43.640 "params": { 00:09:43.640 "block_size": 512, 00:09:43.640 "num_blocks": 1048576, 00:09:43.640 "name": "malloc1" 00:09:43.640 }, 00:09:43.640 "method": "bdev_malloc_create" 00:09:43.640 }, 00:09:43.640 { 00:09:43.640 "method": "bdev_wait_for_examine" 00:09:43.640 } 00:09:43.640 ] 00:09:43.640 } 00:09:43.640 ] 00:09:43.640 } 00:09:43.640 [2024-07-14 21:08:55.068974] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
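The dd_malloc_copy run beginning here copies between two RAM-backed bdevs rather than files: the JSON printed just above creates malloc0 and malloc1 (1048576 blocks of 512 bytes each, i.e. 512 MiB) and spdk_dd is pointed at them with --ib/--ob. A sketch of the same invocation, with the configuration written to a temporary file instead of being passed on an inherited descriptor; the config values are the ones from this run.

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=$(mktemp)
cat > "$conf" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
"$DD" --ib=malloc0 --ob=malloc1 --json "$conf"   # malloc0 -> malloc1; the run below copies back the other way
rm -f "$conf"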
00:09:43.640 [2024-07-14 21:08:55.069149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65727 ] 00:09:43.899 [2024-07-14 21:08:55.240990] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.158 [2024-07-14 21:08:55.470772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.158 [2024-07-14 21:08:55.649360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:51.502  Copying: 156/512 [MB] (156 MBps) Copying: 339/512 [MB] (183 MBps) Copying: 512/512 [MB] (average 173 MBps) 00:09:51.502 00:09:51.502 21:09:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:09:51.502 21:09:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:09:51.502 21:09:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:51.502 21:09:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:51.502 { 00:09:51.502 "subsystems": [ 00:09:51.502 { 00:09:51.502 "subsystem": "bdev", 00:09:51.502 "config": [ 00:09:51.502 { 00:09:51.502 "params": { 00:09:51.502 "block_size": 512, 00:09:51.502 "num_blocks": 1048576, 00:09:51.502 "name": "malloc0" 00:09:51.502 }, 00:09:51.502 "method": "bdev_malloc_create" 00:09:51.502 }, 00:09:51.502 { 00:09:51.502 "params": { 00:09:51.502 "block_size": 512, 00:09:51.502 "num_blocks": 1048576, 00:09:51.502 "name": "malloc1" 00:09:51.502 }, 00:09:51.502 "method": "bdev_malloc_create" 00:09:51.502 }, 00:09:51.502 { 00:09:51.502 "method": "bdev_wait_for_examine" 00:09:51.502 } 00:09:51.502 ] 00:09:51.502 } 00:09:51.502 ] 00:09:51.502 } 00:09:51.502 [2024-07-14 21:09:02.798180] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:51.502 [2024-07-14 21:09:02.798324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65823 ] 00:09:51.502 [2024-07-14 21:09:02.954421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.761 [2024-07-14 21:09:03.128191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.019 [2024-07-14 21:09:03.313119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:59.786  Copying: 142/512 [MB] (142 MBps) Copying: 286/512 [MB] (144 MBps) Copying: 428/512 [MB] (141 MBps) Copying: 512/512 [MB] (average 143 MBps) 00:09:59.786 00:09:59.786 ************************************ 00:09:59.786 END TEST dd_malloc_copy 00:09:59.786 ************************************ 00:09:59.786 00:09:59.786 real 0m16.047s 00:09:59.786 user 0m14.983s 00:09:59.786 sys 0m0.854s 00:09:59.786 21:09:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.786 21:09:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:59.786 21:09:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:09:59.786 00:09:59.786 real 0m16.185s 00:09:59.786 user 0m15.041s 00:09:59.786 sys 0m0.933s 00:09:59.786 21:09:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.786 ************************************ 00:09:59.786 21:09:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:59.786 END TEST spdk_dd_malloc 00:09:59.786 ************************************ 00:09:59.786 21:09:11 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:09:59.786 21:09:11 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:59.786 21:09:11 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:59.786 21:09:11 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.786 21:09:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:59.786 ************************************ 00:09:59.786 START TEST spdk_dd_bdev_to_bdev 00:09:59.786 ************************************ 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:59.786 * Looking for test storage... 
00:09:59.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:09:59.786 
21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:59.786 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.787 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:59.787 ************************************ 00:09:59.787 START TEST dd_inflate_file 00:09:59.787 ************************************ 00:09:59.787 21:09:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:59.787 [2024-07-14 21:09:11.302600] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
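The dd.dump0 file being inflated here starts out as the 27-byte magic line ('This Is Our Magic, find it' plus a trailing newline; xtrace does not show the redirect, but the size check below implies it lands in dd.dump0) and then gets 64 x 1 MiB of zeros appended via --oflag=append, for 27 + 64*1048576 = 67108891 bytes, exactly the test_file0_size that wc -c reports a little further down. A condensed sketch of the same two steps, with spdk_dd assumed on PATH and the long repo paths shortened for readability:

  echo 'This Is Our Magic, find it' > dd.dump0    # 26 characters + newline = 27 bytes
  spdk_dd --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64
  wc -c < dd.dump0                                # 67108891 = 27 + 64*1048576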
00:09:59.787 [2024-07-14 21:09:11.302800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65978 ] 00:10:00.045 [2024-07-14 21:09:11.473911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.304 [2024-07-14 21:09:11.695652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.563 [2024-07-14 21:09:11.868669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:01.497  Copying: 64/64 [MB] (average 1729 MBps) 00:10:01.498 00:10:01.498 00:10:01.498 real 0m1.792s 00:10:01.498 user 0m1.492s 00:10:01.498 sys 0m0.871s 00:10:01.498 21:09:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:01.498 21:09:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:10:01.498 ************************************ 00:10:01.498 END TEST dd_inflate_file 00:10:01.498 ************************************ 00:10:01.498 21:09:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:10:01.498 21:09:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:10:01.498 21:09:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:10:01.757 21:09:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:10:01.757 21:09:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:01.757 21:09:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:10:01.757 21:09:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.757 21:09:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:01.757 21:09:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:01.757 21:09:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:01.757 ************************************ 00:10:01.757 START TEST dd_copy_to_out_bdev 00:10:01.757 ************************************ 00:10:01.757 21:09:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:10:01.757 { 00:10:01.757 "subsystems": [ 00:10:01.757 { 00:10:01.757 "subsystem": "bdev", 00:10:01.757 "config": [ 00:10:01.757 { 00:10:01.757 "params": { 00:10:01.757 "trtype": "pcie", 00:10:01.757 "traddr": "0000:00:10.0", 00:10:01.757 "name": "Nvme0" 00:10:01.757 }, 00:10:01.757 "method": "bdev_nvme_attach_controller" 00:10:01.757 }, 00:10:01.757 { 00:10:01.757 "params": { 00:10:01.757 "trtype": "pcie", 00:10:01.757 "traddr": "0000:00:11.0", 00:10:01.757 "name": "Nvme1" 00:10:01.757 }, 00:10:01.757 "method": "bdev_nvme_attach_controller" 00:10:01.757 }, 00:10:01.757 { 00:10:01.757 "method": "bdev_wait_for_examine" 00:10:01.757 } 00:10:01.757 ] 00:10:01.757 } 00:10:01.757 ] 00:10:01.757 } 00:10:01.757 [2024-07-14 21:09:13.154264] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
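dd_copy_to_out_bdev then pushes that file into the first NVMe namespace: the JSON above attaches both PCIe controllers (0000:00:10.0 as Nvme0, 0000:00:11.0 as Nvme1) with bdev_nvme_attach_controller, and spdk_dd copies --if=dd.dump0 onto --ob=Nvme0n1. A standalone sketch under the same assumptions as before (spdk_dd on PATH, file names illustrative, the test streams its config over /dev/fd/62):

  cat > nvme_pair.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:10.0" } },
      { "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme1", "trtype": "pcie", "traddr": "0000:00:11.0" } },
      { "method": "bdev_wait_for_examine" } ] } ] }
  EOF
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --json nvme_pair.json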
00:10:01.757 [2024-07-14 21:09:13.154452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66029 ] 00:10:02.015 [2024-07-14 21:09:13.325850] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.015 [2024-07-14 21:09:13.552525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.273 [2024-07-14 21:09:13.716458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:04.841  Copying: 47/64 [MB] (47 MBps) Copying: 64/64 [MB] (average 46 MBps) 00:10:04.841 00:10:04.841 00:10:04.841 real 0m3.328s 00:10:04.841 user 0m3.030s 00:10:04.841 sys 0m2.273s 00:10:04.841 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.841 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:04.841 ************************************ 00:10:04.841 END TEST dd_copy_to_out_bdev 00:10:04.841 ************************************ 00:10:05.100 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:10:05.100 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:10:05.100 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:10:05.100 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:05.100 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.100 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:05.100 ************************************ 00:10:05.100 START TEST dd_offset_magic 00:10:05.100 ************************************ 00:10:05.100 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:10:05.100 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:10:05.100 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:10:05.100 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:10:05.100 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:10:05.100 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:10:05.100 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:10:05.100 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:05.100 21:09:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:05.100 { 00:10:05.100 "subsystems": [ 00:10:05.100 { 00:10:05.100 "subsystem": "bdev", 00:10:05.100 "config": [ 00:10:05.100 { 00:10:05.100 "params": { 00:10:05.100 "trtype": "pcie", 00:10:05.100 "traddr": "0000:00:10.0", 00:10:05.100 "name": "Nvme0" 00:10:05.100 }, 00:10:05.100 "method": "bdev_nvme_attach_controller" 00:10:05.100 }, 00:10:05.100 { 00:10:05.100 "params": { 00:10:05.100 "trtype": "pcie", 00:10:05.100 "traddr": 
"0000:00:11.0", 00:10:05.100 "name": "Nvme1" 00:10:05.100 }, 00:10:05.100 "method": "bdev_nvme_attach_controller" 00:10:05.100 }, 00:10:05.100 { 00:10:05.100 "method": "bdev_wait_for_examine" 00:10:05.100 } 00:10:05.100 ] 00:10:05.100 } 00:10:05.100 ] 00:10:05.100 } 00:10:05.100 [2024-07-14 21:09:16.524715] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:05.100 [2024-07-14 21:09:16.524874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66086 ] 00:10:05.359 [2024-07-14 21:09:16.685262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.619 [2024-07-14 21:09:16.908089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.619 [2024-07-14 21:09:17.067529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:06.814  Copying: 65/65 [MB] (average 955 MBps) 00:10:06.814 00:10:06.814 21:09:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:10:06.814 21:09:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:10:06.814 21:09:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:06.814 21:09:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:06.814 { 00:10:06.814 "subsystems": [ 00:10:06.814 { 00:10:06.814 "subsystem": "bdev", 00:10:06.814 "config": [ 00:10:06.814 { 00:10:06.814 "params": { 00:10:06.814 "trtype": "pcie", 00:10:06.814 "traddr": "0000:00:10.0", 00:10:06.814 "name": "Nvme0" 00:10:06.814 }, 00:10:06.814 "method": "bdev_nvme_attach_controller" 00:10:06.814 }, 00:10:06.814 { 00:10:06.814 "params": { 00:10:06.814 "trtype": "pcie", 00:10:06.814 "traddr": "0000:00:11.0", 00:10:06.814 "name": "Nvme1" 00:10:06.814 }, 00:10:06.814 "method": "bdev_nvme_attach_controller" 00:10:06.814 }, 00:10:06.814 { 00:10:06.814 "method": "bdev_wait_for_examine" 00:10:06.814 } 00:10:06.814 ] 00:10:06.814 } 00:10:06.814 ] 00:10:06.814 } 00:10:06.814 [2024-07-14 21:09:18.299725] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:06.814 [2024-07-14 21:09:18.299926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66113 ] 00:10:07.072 [2024-07-14 21:09:18.470626] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.331 [2024-07-14 21:09:18.623336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.331 [2024-07-14 21:09:18.769351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:08.964  Copying: 1024/1024 [kB] (average 500 MBps) 00:10:08.964 00:10:08.965 21:09:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:10:08.965 21:09:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:10:08.965 21:09:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:10:08.965 21:09:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:10:08.965 21:09:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:10:08.965 21:09:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:08.965 21:09:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:08.965 { 00:10:08.965 "subsystems": [ 00:10:08.965 { 00:10:08.965 "subsystem": "bdev", 00:10:08.965 "config": [ 00:10:08.965 { 00:10:08.965 "params": { 00:10:08.965 "trtype": "pcie", 00:10:08.965 "traddr": "0000:00:10.0", 00:10:08.965 "name": "Nvme0" 00:10:08.965 }, 00:10:08.965 "method": "bdev_nvme_attach_controller" 00:10:08.965 }, 00:10:08.965 { 00:10:08.965 "params": { 00:10:08.965 "trtype": "pcie", 00:10:08.965 "traddr": "0000:00:11.0", 00:10:08.965 "name": "Nvme1" 00:10:08.965 }, 00:10:08.965 "method": "bdev_nvme_attach_controller" 00:10:08.965 }, 00:10:08.965 { 00:10:08.965 "method": "bdev_wait_for_examine" 00:10:08.965 } 00:10:08.965 ] 00:10:08.965 } 00:10:08.965 ] 00:10:08.965 } 00:10:08.965 [2024-07-14 21:09:20.247212] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:08.965 [2024-07-14 21:09:20.247690] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66147 ] 00:10:08.965 [2024-07-14 21:09:20.421746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.223 [2024-07-14 21:09:20.649427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.480 [2024-07-14 21:09:20.871794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:10.710  Copying: 65/65 [MB] (average 1000 MBps) 00:10:10.710 00:10:10.710 21:09:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:10:10.710 21:09:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:10:10.710 21:09:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:10.710 21:09:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:10.710 { 00:10:10.710 "subsystems": [ 00:10:10.710 { 00:10:10.710 "subsystem": "bdev", 00:10:10.710 "config": [ 00:10:10.710 { 00:10:10.710 "params": { 00:10:10.710 "trtype": "pcie", 00:10:10.710 "traddr": "0000:00:10.0", 00:10:10.710 "name": "Nvme0" 00:10:10.710 }, 00:10:10.710 "method": "bdev_nvme_attach_controller" 00:10:10.710 }, 00:10:10.710 { 00:10:10.710 "params": { 00:10:10.711 "trtype": "pcie", 00:10:10.711 "traddr": "0000:00:11.0", 00:10:10.711 "name": "Nvme1" 00:10:10.711 }, 00:10:10.711 "method": "bdev_nvme_attach_controller" 00:10:10.711 }, 00:10:10.711 { 00:10:10.711 "method": "bdev_wait_for_examine" 00:10:10.711 } 00:10:10.711 ] 00:10:10.711 } 00:10:10.711 ] 00:10:10.711 } 00:10:10.711 [2024-07-14 21:09:22.143334] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:10.711 [2024-07-14 21:09:22.143504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66180 ] 00:10:10.969 [2024-07-14 21:09:22.314498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.969 [2024-07-14 21:09:22.472580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.226 [2024-07-14 21:09:22.639128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:12.419  Copying: 1024/1024 [kB] (average 500 MBps) 00:10:12.419 00:10:12.419 ************************************ 00:10:12.419 END TEST dd_offset_magic 00:10:12.419 ************************************ 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:10:12.419 00:10:12.419 real 0m7.420s 00:10:12.419 user 0m6.298s 00:10:12.419 sys 0m2.200s 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:12.419 21:09:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:12.677 { 00:10:12.677 "subsystems": [ 00:10:12.677 { 00:10:12.677 "subsystem": "bdev", 00:10:12.677 "config": [ 00:10:12.677 { 00:10:12.677 "params": { 00:10:12.677 "trtype": "pcie", 00:10:12.677 "traddr": "0000:00:10.0", 00:10:12.677 "name": "Nvme0" 00:10:12.677 }, 00:10:12.677 "method": "bdev_nvme_attach_controller" 00:10:12.677 }, 00:10:12.677 { 00:10:12.677 "params": { 00:10:12.677 "trtype": "pcie", 00:10:12.677 "traddr": "0000:00:11.0", 00:10:12.677 "name": "Nvme1" 00:10:12.677 }, 00:10:12.677 "method": "bdev_nvme_attach_controller" 00:10:12.677 }, 00:10:12.677 { 00:10:12.677 "method": "bdev_wait_for_examine" 00:10:12.677 } 00:10:12.677 ] 00:10:12.677 } 00:10:12.677 ] 00:10:12.677 } 00:10:12.677 [2024-07-14 21:09:23.986893] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
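The cleanup path reuses the same tool to zero the head of each namespace: clear_nvme is handed a size of 4194330 bytes (4*1048576 + 26), which at --bs=1048576 evidently rounds up to the --count=5 seen above, so the first 5 MiB of Nvme0n1 are overwritten from /dev/zero; the Nvme1n1 pass that follows is identical. Equivalent one-liner, again with the illustrative nvme_pair.json:

  spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json nvme_pair.json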
00:10:12.677 [2024-07-14 21:09:23.987023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66229 ] 00:10:12.677 [2024-07-14 21:09:24.143340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.936 [2024-07-14 21:09:24.303957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.936 [2024-07-14 21:09:24.454146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:14.130  Copying: 5120/5120 [kB] (average 1250 MBps) 00:10:14.130 00:10:14.130 21:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:10:14.130 21:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:10:14.130 21:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:10:14.130 21:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:10:14.130 21:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:10:14.130 21:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:10:14.130 21:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:10:14.130 21:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:10:14.130 21:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:14.130 21:09:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:14.130 { 00:10:14.130 "subsystems": [ 00:10:14.130 { 00:10:14.130 "subsystem": "bdev", 00:10:14.130 "config": [ 00:10:14.130 { 00:10:14.130 "params": { 00:10:14.130 "trtype": "pcie", 00:10:14.130 "traddr": "0000:00:10.0", 00:10:14.130 "name": "Nvme0" 00:10:14.130 }, 00:10:14.130 "method": "bdev_nvme_attach_controller" 00:10:14.130 }, 00:10:14.130 { 00:10:14.130 "params": { 00:10:14.130 "trtype": "pcie", 00:10:14.130 "traddr": "0000:00:11.0", 00:10:14.130 "name": "Nvme1" 00:10:14.130 }, 00:10:14.130 "method": "bdev_nvme_attach_controller" 00:10:14.130 }, 00:10:14.130 { 00:10:14.130 "method": "bdev_wait_for_examine" 00:10:14.130 } 00:10:14.130 ] 00:10:14.130 } 00:10:14.130 ] 00:10:14.130 } 00:10:14.130 [2024-07-14 21:09:25.591593] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:14.130 [2024-07-14 21:09:25.591765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66251 ] 00:10:14.389 [2024-07-14 21:09:25.744553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.389 [2024-07-14 21:09:25.894248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.648 [2024-07-14 21:09:26.049542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:15.845  Copying: 5120/5120 [kB] (average 714 MBps) 00:10:15.845 00:10:15.845 21:09:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:10:15.845 ************************************ 00:10:15.845 END TEST spdk_dd_bdev_to_bdev 00:10:15.845 ************************************ 00:10:15.845 00:10:15.845 real 0m16.212s 00:10:15.845 user 0m13.813s 00:10:15.845 sys 0m6.977s 00:10:15.845 21:09:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:15.845 21:09:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:15.845 21:09:27 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:15.845 21:09:27 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:10:15.845 21:09:27 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:10:15.845 21:09:27 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:15.845 21:09:27 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.845 21:09:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:15.845 ************************************ 00:10:15.845 START TEST spdk_dd_uring 00:10:15.845 ************************************ 00:10:15.845 21:09:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:10:16.106 * Looking for test storage... 
00:10:16.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:10:16.106 ************************************ 00:10:16.106 START TEST dd_uring_copy 00:10:16.106 ************************************ 00:10:16.106 
21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=51muwpa89tnbdhuqs2llx53ywg8tswfw13o4gaw368ls65c39xcmafv3frcrke5nt3x5eku544u414zlw4b3aru2ukffquhkqj2aqybyor541xeiq1tfcud6hjaki80w4msqucckv81pl4n71ee1r2d2fgvew1e19n519x0vb7scvod6nss81hpzy30mlf85gsdlj1a0tun6utmztvxoa33wgmq962tesb2mz7qb34f3oli7nirpcpc9x488qefi5pun0dvl876l2415hsfwauqccgnqr5f0xi4p2vhda8mekkxp9e0mhgf7aslaks3ygedtytpggkrz2jfyu885rf7rj3t4etmxdtvegoop3gjpw9tt87mfd2ndcmfzd3bs0mr9lppytkiy6i6heok26d5znu8tg64oz3cvsnptm43crrtnl3zh2anhw9gdh2g4lc7oxrdir2h7szu957byqqnk880drqsdbbxm65b9cz57ywsdwlmxn9jjebu8nhs5lvily881iigrb2g943hf7i53r8piebfu9h5vaihkaixnufudho5ikwb6ofwaj2nnj1af6cy6dtl9odss8x6nz59nbeyjs72z881iy01qszv310gyiuf21sbytk9yzw3md9dlpit1ufgp5ht5shiv0f7l8493lekp23ilk5rz5583ds7y6nc0vdg2rs5qapehrnlv4flillhgi8wvkkvrayy0wb8qjhxtgyf53mz91olexhwsm4t81xxljw1feoc95bvpgwi6zbxj2a9knnb3d6x600ajnf9014k1k6fnhvgwiyu3d7czd1e4n1tank71r933wtzgn7y4n3c8y3g6g8tod12d9cofgii27dl9uijawk5aaohx67csflmpelbatrlrsv2khvu1wg4a07jiaok7nh6x3tfe6zv63907ztdvlfrkq4m5emd99e541gc2yqoyibc9everl8kmguwr8zrq35m5cv3o4416es01i70y1dl6drsfbfxxhtc2lnt2 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 51muwpa89tnbdhuqs2llx53ywg8tswfw13o4gaw368ls65c39xcmafv3frcrke5nt3x5eku544u414zlw4b3aru2ukffquhkqj2aqybyor541xeiq1tfcud6hjaki80w4msqucckv81pl4n71ee1r2d2fgvew1e19n519x0vb7scvod6nss81hpzy30mlf85gsdlj1a0tun6utmztvxoa33wgmq962tesb2mz7qb34f3oli7nirpcpc9x488qefi5pun0dvl876l2415hsfwauqccgnqr5f0xi4p2vhda8mekkxp9e0mhgf7aslaks3ygedtytpggkrz2jfyu885rf7rj3t4etmxdtvegoop3gjpw9tt87mfd2ndcmfzd3bs0mr9lppytkiy6i6heok26d5znu8tg64oz3cvsnptm43crrtnl3zh2anhw9gdh2g4lc7oxrdir2h7szu957byqqnk880drqsdbbxm65b9cz57ywsdwlmxn9jjebu8nhs5lvily881iigrb2g943hf7i53r8piebfu9h5vaihkaixnufudho5ikwb6ofwaj2nnj1af6cy6dtl9odss8x6nz59nbeyjs72z881iy01qszv310gyiuf21sbytk9yzw3md9dlpit1ufgp5ht5shiv0f7l8493lekp23ilk5rz5583ds7y6nc0vdg2rs5qapehrnlv4flillhgi8wvkkvrayy0wb8qjhxtgyf53mz91olexhwsm4t81xxljw1feoc95bvpgwi6zbxj2a9knnb3d6x600ajnf9014k1k6fnhvgwiyu3d7czd1e4n1tank71r933wtzgn7y4n3c8y3g6g8tod12d9cofgii27dl9uijawk5aaohx67csflmpelbatrlrsv2khvu1wg4a07jiaok7nh6x3tfe6zv63907ztdvlfrkq4m5emd99e541gc2yqoyibc9everl8kmguwr8zrq35m5cv3o4416es01i70y1dl6drsfbfxxhtc2lnt2 00:10:16.106 21:09:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:10:16.106 [2024-07-14 21:09:27.604346] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
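The odd-looking --bs=536869887 is what makes magic.dump0 exactly fill the 512M zram device set up above: the echoed magic is 1024 generated characters plus a newline (1025 bytes, assuming the usual redirect into magic.dump0 that xtrace does not display), and one appended block of zeros brings the file to exactly 512 MiB. The appended block alone is just under 512 MiB, which is why the progress line that follows reports Copying: 511/511 [MB].

  # 1025 + 536869887 = 536870912 = 512 * 1024 * 1024 bytes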
00:10:16.106 [2024-07-14 21:09:27.604513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66333 ] 00:10:16.366 [2024-07-14 21:09:27.774728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.625 [2024-07-14 21:09:27.960222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.625 [2024-07-14 21:09:28.102271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:19.497  Copying: 511/511 [MB] (average 2124 MBps) 00:10:19.497 00:10:19.497 21:09:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:10:19.497 21:09:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:10:19.497 21:09:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:19.497 21:09:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:19.497 { 00:10:19.497 "subsystems": [ 00:10:19.497 { 00:10:19.497 "subsystem": "bdev", 00:10:19.497 "config": [ 00:10:19.497 { 00:10:19.497 "params": { 00:10:19.497 "block_size": 512, 00:10:19.497 "num_blocks": 1048576, 00:10:19.497 "name": "malloc0" 00:10:19.497 }, 00:10:19.497 "method": "bdev_malloc_create" 00:10:19.497 }, 00:10:19.497 { 00:10:19.497 "params": { 00:10:19.497 "filename": "/dev/zram1", 00:10:19.497 "name": "uring0" 00:10:19.497 }, 00:10:19.497 "method": "bdev_uring_create" 00:10:19.497 }, 00:10:19.497 { 00:10:19.497 "method": "bdev_wait_for_examine" 00:10:19.497 } 00:10:19.497 ] 00:10:19.497 } 00:10:19.497 ] 00:10:19.497 } 00:10:19.497 [2024-07-14 21:09:30.811244] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
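This is the first copy through the uring bdev: the config pairs the 512 MiB malloc0 bdev with a bdev_uring device backed by /dev/zram1, and spdk_dd streams magic.dump0 onto uring0. A standalone sketch of the setup, assuming the standard zram sysfs interface (the set_zram_dev helper only shows 'echo 512M' in xtrace, so the disksize path is an assumption) and an illustrative config file name:

  dev_id=$(cat /sys/class/zram-control/hot_add)    # returned 1 in this run
  echo 512M > /sys/block/zram${dev_id}/disksize    # assumed target of the helper's 'echo 512M'
  cat > uring_copy.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_malloc_create", "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
      { "method": "bdev_uring_create", "params": { "name": "uring0", "filename": "/dev/zram1" } },
      { "method": "bdev_wait_for_examine" } ] } ] }
  EOF
  spdk_dd --if=magic.dump0 --ob=uring0 --json uring_copy.json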
00:10:19.497 [2024-07-14 21:09:30.811412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66375 ] 00:10:19.497 [2024-07-14 21:09:30.978696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.756 [2024-07-14 21:09:31.138670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.756 [2024-07-14 21:09:31.287492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:25.326  Copying: 217/512 [MB] (217 MBps) Copying: 415/512 [MB] (197 MBps) Copying: 512/512 [MB] (average 204 MBps) 00:10:25.326 00:10:25.326 21:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:10:25.326 21:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:10:25.326 21:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:25.326 21:09:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:25.326 { 00:10:25.326 "subsystems": [ 00:10:25.326 { 00:10:25.326 "subsystem": "bdev", 00:10:25.326 "config": [ 00:10:25.326 { 00:10:25.326 "params": { 00:10:25.326 "block_size": 512, 00:10:25.326 "num_blocks": 1048576, 00:10:25.326 "name": "malloc0" 00:10:25.326 }, 00:10:25.326 "method": "bdev_malloc_create" 00:10:25.326 }, 00:10:25.326 { 00:10:25.326 "params": { 00:10:25.326 "filename": "/dev/zram1", 00:10:25.326 "name": "uring0" 00:10:25.326 }, 00:10:25.326 "method": "bdev_uring_create" 00:10:25.326 }, 00:10:25.326 { 00:10:25.326 "method": "bdev_wait_for_examine" 00:10:25.326 } 00:10:25.326 ] 00:10:25.326 } 00:10:25.326 ] 00:10:25.326 } 00:10:25.326 [2024-07-14 21:09:36.378909] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:25.326 [2024-07-14 21:09:36.379048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66463 ] 00:10:25.326 [2024-07-14 21:09:36.536270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.327 [2024-07-14 21:09:36.705585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.327 [2024-07-14 21:09:36.872525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:32.279  Copying: 137/512 [MB] (137 MBps) Copying: 274/512 [MB] (136 MBps) Copying: 410/512 [MB] (135 MBps) Copying: 512/512 [MB] (average 134 MBps) 00:10:32.279 00:10:32.279 21:09:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:10:32.280 21:09:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 51muwpa89tnbdhuqs2llx53ywg8tswfw13o4gaw368ls65c39xcmafv3frcrke5nt3x5eku544u414zlw4b3aru2ukffquhkqj2aqybyor541xeiq1tfcud6hjaki80w4msqucckv81pl4n71ee1r2d2fgvew1e19n519x0vb7scvod6nss81hpzy30mlf85gsdlj1a0tun6utmztvxoa33wgmq962tesb2mz7qb34f3oli7nirpcpc9x488qefi5pun0dvl876l2415hsfwauqccgnqr5f0xi4p2vhda8mekkxp9e0mhgf7aslaks3ygedtytpggkrz2jfyu885rf7rj3t4etmxdtvegoop3gjpw9tt87mfd2ndcmfzd3bs0mr9lppytkiy6i6heok26d5znu8tg64oz3cvsnptm43crrtnl3zh2anhw9gdh2g4lc7oxrdir2h7szu957byqqnk880drqsdbbxm65b9cz57ywsdwlmxn9jjebu8nhs5lvily881iigrb2g943hf7i53r8piebfu9h5vaihkaixnufudho5ikwb6ofwaj2nnj1af6cy6dtl9odss8x6nz59nbeyjs72z881iy01qszv310gyiuf21sbytk9yzw3md9dlpit1ufgp5ht5shiv0f7l8493lekp23ilk5rz5583ds7y6nc0vdg2rs5qapehrnlv4flillhgi8wvkkvrayy0wb8qjhxtgyf53mz91olexhwsm4t81xxljw1feoc95bvpgwi6zbxj2a9knnb3d6x600ajnf9014k1k6fnhvgwiyu3d7czd1e4n1tank71r933wtzgn7y4n3c8y3g6g8tod12d9cofgii27dl9uijawk5aaohx67csflmpelbatrlrsv2khvu1wg4a07jiaok7nh6x3tfe6zv63907ztdvlfrkq4m5emd99e541gc2yqoyibc9everl8kmguwr8zrq35m5cv3o4416es01i70y1dl6drsfbfxxhtc2lnt2 == 
\5\1\m\u\w\p\a\8\9\t\n\b\d\h\u\q\s\2\l\l\x\5\3\y\w\g\8\t\s\w\f\w\1\3\o\4\g\a\w\3\6\8\l\s\6\5\c\3\9\x\c\m\a\f\v\3\f\r\c\r\k\e\5\n\t\3\x\5\e\k\u\5\4\4\u\4\1\4\z\l\w\4\b\3\a\r\u\2\u\k\f\f\q\u\h\k\q\j\2\a\q\y\b\y\o\r\5\4\1\x\e\i\q\1\t\f\c\u\d\6\h\j\a\k\i\8\0\w\4\m\s\q\u\c\c\k\v\8\1\p\l\4\n\7\1\e\e\1\r\2\d\2\f\g\v\e\w\1\e\1\9\n\5\1\9\x\0\v\b\7\s\c\v\o\d\6\n\s\s\8\1\h\p\z\y\3\0\m\l\f\8\5\g\s\d\l\j\1\a\0\t\u\n\6\u\t\m\z\t\v\x\o\a\3\3\w\g\m\q\9\6\2\t\e\s\b\2\m\z\7\q\b\3\4\f\3\o\l\i\7\n\i\r\p\c\p\c\9\x\4\8\8\q\e\f\i\5\p\u\n\0\d\v\l\8\7\6\l\2\4\1\5\h\s\f\w\a\u\q\c\c\g\n\q\r\5\f\0\x\i\4\p\2\v\h\d\a\8\m\e\k\k\x\p\9\e\0\m\h\g\f\7\a\s\l\a\k\s\3\y\g\e\d\t\y\t\p\g\g\k\r\z\2\j\f\y\u\8\8\5\r\f\7\r\j\3\t\4\e\t\m\x\d\t\v\e\g\o\o\p\3\g\j\p\w\9\t\t\8\7\m\f\d\2\n\d\c\m\f\z\d\3\b\s\0\m\r\9\l\p\p\y\t\k\i\y\6\i\6\h\e\o\k\2\6\d\5\z\n\u\8\t\g\6\4\o\z\3\c\v\s\n\p\t\m\4\3\c\r\r\t\n\l\3\z\h\2\a\n\h\w\9\g\d\h\2\g\4\l\c\7\o\x\r\d\i\r\2\h\7\s\z\u\9\5\7\b\y\q\q\n\k\8\8\0\d\r\q\s\d\b\b\x\m\6\5\b\9\c\z\5\7\y\w\s\d\w\l\m\x\n\9\j\j\e\b\u\8\n\h\s\5\l\v\i\l\y\8\8\1\i\i\g\r\b\2\g\9\4\3\h\f\7\i\5\3\r\8\p\i\e\b\f\u\9\h\5\v\a\i\h\k\a\i\x\n\u\f\u\d\h\o\5\i\k\w\b\6\o\f\w\a\j\2\n\n\j\1\a\f\6\c\y\6\d\t\l\9\o\d\s\s\8\x\6\n\z\5\9\n\b\e\y\j\s\7\2\z\8\8\1\i\y\0\1\q\s\z\v\3\1\0\g\y\i\u\f\2\1\s\b\y\t\k\9\y\z\w\3\m\d\9\d\l\p\i\t\1\u\f\g\p\5\h\t\5\s\h\i\v\0\f\7\l\8\4\9\3\l\e\k\p\2\3\i\l\k\5\r\z\5\5\8\3\d\s\7\y\6\n\c\0\v\d\g\2\r\s\5\q\a\p\e\h\r\n\l\v\4\f\l\i\l\l\h\g\i\8\w\v\k\k\v\r\a\y\y\0\w\b\8\q\j\h\x\t\g\y\f\5\3\m\z\9\1\o\l\e\x\h\w\s\m\4\t\8\1\x\x\l\j\w\1\f\e\o\c\9\5\b\v\p\g\w\i\6\z\b\x\j\2\a\9\k\n\n\b\3\d\6\x\6\0\0\a\j\n\f\9\0\1\4\k\1\k\6\f\n\h\v\g\w\i\y\u\3\d\7\c\z\d\1\e\4\n\1\t\a\n\k\7\1\r\9\3\3\w\t\z\g\n\7\y\4\n\3\c\8\y\3\g\6\g\8\t\o\d\1\2\d\9\c\o\f\g\i\i\2\7\d\l\9\u\i\j\a\w\k\5\a\a\o\h\x\6\7\c\s\f\l\m\p\e\l\b\a\t\r\l\r\s\v\2\k\h\v\u\1\w\g\4\a\0\7\j\i\a\o\k\7\n\h\6\x\3\t\f\e\6\z\v\6\3\9\0\7\z\t\d\v\l\f\r\k\q\4\m\5\e\m\d\9\9\e\5\4\1\g\c\2\y\q\o\y\i\b\c\9\e\v\e\r\l\8\k\m\g\u\w\r\8\z\r\q\3\5\m\5\c\v\3\o\4\4\1\6\e\s\0\1\i\7\0\y\1\d\l\6\d\r\s\f\b\f\x\x\h\t\c\2\l\n\t\2 ]] 00:10:32.280 21:09:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:10:32.280 21:09:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 51muwpa89tnbdhuqs2llx53ywg8tswfw13o4gaw368ls65c39xcmafv3frcrke5nt3x5eku544u414zlw4b3aru2ukffquhkqj2aqybyor541xeiq1tfcud6hjaki80w4msqucckv81pl4n71ee1r2d2fgvew1e19n519x0vb7scvod6nss81hpzy30mlf85gsdlj1a0tun6utmztvxoa33wgmq962tesb2mz7qb34f3oli7nirpcpc9x488qefi5pun0dvl876l2415hsfwauqccgnqr5f0xi4p2vhda8mekkxp9e0mhgf7aslaks3ygedtytpggkrz2jfyu885rf7rj3t4etmxdtvegoop3gjpw9tt87mfd2ndcmfzd3bs0mr9lppytkiy6i6heok26d5znu8tg64oz3cvsnptm43crrtnl3zh2anhw9gdh2g4lc7oxrdir2h7szu957byqqnk880drqsdbbxm65b9cz57ywsdwlmxn9jjebu8nhs5lvily881iigrb2g943hf7i53r8piebfu9h5vaihkaixnufudho5ikwb6ofwaj2nnj1af6cy6dtl9odss8x6nz59nbeyjs72z881iy01qszv310gyiuf21sbytk9yzw3md9dlpit1ufgp5ht5shiv0f7l8493lekp23ilk5rz5583ds7y6nc0vdg2rs5qapehrnlv4flillhgi8wvkkvrayy0wb8qjhxtgyf53mz91olexhwsm4t81xxljw1feoc95bvpgwi6zbxj2a9knnb3d6x600ajnf9014k1k6fnhvgwiyu3d7czd1e4n1tank71r933wtzgn7y4n3c8y3g6g8tod12d9cofgii27dl9uijawk5aaohx67csflmpelbatrlrsv2khvu1wg4a07jiaok7nh6x3tfe6zv63907ztdvlfrkq4m5emd99e541gc2yqoyibc9everl8kmguwr8zrq35m5cv3o4416es01i70y1dl6drsfbfxxhtc2lnt2 == 
\5\1\m\u\w\p\a\8\9\t\n\b\d\h\u\q\s\2\l\l\x\5\3\y\w\g\8\t\s\w\f\w\1\3\o\4\g\a\w\3\6\8\l\s\6\5\c\3\9\x\c\m\a\f\v\3\f\r\c\r\k\e\5\n\t\3\x\5\e\k\u\5\4\4\u\4\1\4\z\l\w\4\b\3\a\r\u\2\u\k\f\f\q\u\h\k\q\j\2\a\q\y\b\y\o\r\5\4\1\x\e\i\q\1\t\f\c\u\d\6\h\j\a\k\i\8\0\w\4\m\s\q\u\c\c\k\v\8\1\p\l\4\n\7\1\e\e\1\r\2\d\2\f\g\v\e\w\1\e\1\9\n\5\1\9\x\0\v\b\7\s\c\v\o\d\6\n\s\s\8\1\h\p\z\y\3\0\m\l\f\8\5\g\s\d\l\j\1\a\0\t\u\n\6\u\t\m\z\t\v\x\o\a\3\3\w\g\m\q\9\6\2\t\e\s\b\2\m\z\7\q\b\3\4\f\3\o\l\i\7\n\i\r\p\c\p\c\9\x\4\8\8\q\e\f\i\5\p\u\n\0\d\v\l\8\7\6\l\2\4\1\5\h\s\f\w\a\u\q\c\c\g\n\q\r\5\f\0\x\i\4\p\2\v\h\d\a\8\m\e\k\k\x\p\9\e\0\m\h\g\f\7\a\s\l\a\k\s\3\y\g\e\d\t\y\t\p\g\g\k\r\z\2\j\f\y\u\8\8\5\r\f\7\r\j\3\t\4\e\t\m\x\d\t\v\e\g\o\o\p\3\g\j\p\w\9\t\t\8\7\m\f\d\2\n\d\c\m\f\z\d\3\b\s\0\m\r\9\l\p\p\y\t\k\i\y\6\i\6\h\e\o\k\2\6\d\5\z\n\u\8\t\g\6\4\o\z\3\c\v\s\n\p\t\m\4\3\c\r\r\t\n\l\3\z\h\2\a\n\h\w\9\g\d\h\2\g\4\l\c\7\o\x\r\d\i\r\2\h\7\s\z\u\9\5\7\b\y\q\q\n\k\8\8\0\d\r\q\s\d\b\b\x\m\6\5\b\9\c\z\5\7\y\w\s\d\w\l\m\x\n\9\j\j\e\b\u\8\n\h\s\5\l\v\i\l\y\8\8\1\i\i\g\r\b\2\g\9\4\3\h\f\7\i\5\3\r\8\p\i\e\b\f\u\9\h\5\v\a\i\h\k\a\i\x\n\u\f\u\d\h\o\5\i\k\w\b\6\o\f\w\a\j\2\n\n\j\1\a\f\6\c\y\6\d\t\l\9\o\d\s\s\8\x\6\n\z\5\9\n\b\e\y\j\s\7\2\z\8\8\1\i\y\0\1\q\s\z\v\3\1\0\g\y\i\u\f\2\1\s\b\y\t\k\9\y\z\w\3\m\d\9\d\l\p\i\t\1\u\f\g\p\5\h\t\5\s\h\i\v\0\f\7\l\8\4\9\3\l\e\k\p\2\3\i\l\k\5\r\z\5\5\8\3\d\s\7\y\6\n\c\0\v\d\g\2\r\s\5\q\a\p\e\h\r\n\l\v\4\f\l\i\l\l\h\g\i\8\w\v\k\k\v\r\a\y\y\0\w\b\8\q\j\h\x\t\g\y\f\5\3\m\z\9\1\o\l\e\x\h\w\s\m\4\t\8\1\x\x\l\j\w\1\f\e\o\c\9\5\b\v\p\g\w\i\6\z\b\x\j\2\a\9\k\n\n\b\3\d\6\x\6\0\0\a\j\n\f\9\0\1\4\k\1\k\6\f\n\h\v\g\w\i\y\u\3\d\7\c\z\d\1\e\4\n\1\t\a\n\k\7\1\r\9\3\3\w\t\z\g\n\7\y\4\n\3\c\8\y\3\g\6\g\8\t\o\d\1\2\d\9\c\o\f\g\i\i\2\7\d\l\9\u\i\j\a\w\k\5\a\a\o\h\x\6\7\c\s\f\l\m\p\e\l\b\a\t\r\l\r\s\v\2\k\h\v\u\1\w\g\4\a\0\7\j\i\a\o\k\7\n\h\6\x\3\t\f\e\6\z\v\6\3\9\0\7\z\t\d\v\l\f\r\k\q\4\m\5\e\m\d\9\9\e\5\4\1\g\c\2\y\q\o\y\i\b\c\9\e\v\e\r\l\8\k\m\g\u\w\r\8\z\r\q\3\5\m\5\c\v\3\o\4\4\1\6\e\s\0\1\i\7\0\y\1\d\l\6\d\r\s\f\b\f\x\x\h\t\c\2\l\n\t\2 ]] 00:10:32.280 21:09:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:10:32.280 21:09:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:10:32.280 21:09:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:10:32.280 21:09:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:32.280 21:09:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:32.280 { 00:10:32.280 "subsystems": [ 00:10:32.280 { 00:10:32.280 "subsystem": "bdev", 00:10:32.280 "config": [ 00:10:32.280 { 00:10:32.280 "params": { 00:10:32.280 "block_size": 512, 00:10:32.280 "num_blocks": 1048576, 00:10:32.280 "name": "malloc0" 00:10:32.280 }, 00:10:32.280 "method": "bdev_malloc_create" 00:10:32.280 }, 00:10:32.280 { 00:10:32.280 "params": { 00:10:32.280 "filename": "/dev/zram1", 00:10:32.280 "name": "uring0" 00:10:32.280 }, 00:10:32.280 "method": "bdev_uring_create" 00:10:32.280 }, 00:10:32.280 { 00:10:32.280 "method": "bdev_wait_for_examine" 00:10:32.280 } 00:10:32.280 ] 00:10:32.280 } 00:10:32.280 ] 00:10:32.280 } 00:10:32.538 [2024-07-14 21:09:43.846889] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:32.538 [2024-07-14 21:09:43.847129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66557 ] 00:10:32.538 [2024-07-14 21:09:44.012787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.797 [2024-07-14 21:09:44.213335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.055 [2024-07-14 21:09:44.398655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:40.210  Copying: 115/512 [MB] (115 MBps) Copying: 230/512 [MB] (114 MBps) Copying: 348/512 [MB] (118 MBps) Copying: 480/512 [MB] (131 MBps) Copying: 512/512 [MB] (average 120 MBps) 00:10:40.210 00:10:40.210 21:09:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:10:40.210 21:09:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:10:40.210 21:09:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:10:40.210 21:09:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:10:40.210 21:09:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:10:40.210 21:09:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:10:40.210 21:09:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:40.210 21:09:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:40.210 { 00:10:40.210 "subsystems": [ 00:10:40.210 { 00:10:40.210 "subsystem": "bdev", 00:10:40.210 "config": [ 00:10:40.210 { 00:10:40.210 "params": { 00:10:40.210 "block_size": 512, 00:10:40.210 "num_blocks": 1048576, 00:10:40.210 "name": "malloc0" 00:10:40.210 }, 00:10:40.210 "method": "bdev_malloc_create" 00:10:40.210 }, 00:10:40.210 { 00:10:40.210 "params": { 00:10:40.210 "filename": "/dev/zram1", 00:10:40.210 "name": "uring0" 00:10:40.210 }, 00:10:40.210 "method": "bdev_uring_create" 00:10:40.210 }, 00:10:40.210 { 00:10:40.210 "params": { 00:10:40.210 "name": "uring0" 00:10:40.210 }, 00:10:40.210 "method": "bdev_uring_delete" 00:10:40.210 }, 00:10:40.210 { 00:10:40.210 "method": "bdev_wait_for_examine" 00:10:40.210 } 00:10:40.210 ] 00:10:40.210 } 00:10:40.210 ] 00:10:40.210 } 00:10:40.210 [2024-07-14 21:09:51.339466] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
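The run above loads a config that creates uring0 and then immediately removes it with bdev_uring_delete, while the dd job itself only shuffles data between /dev/fd descriptors (hence the Copying: 0/0 just below). The next invocation, wrapped in NOT, uses the same create-then-delete config but names uring0 as --ib, and is expected to fail; the 'Currently unable to find bdev with name: uring0' and 'No such device' errors further down are the point of that negative test. A file-based sketch of the delete config, with the file names being illustrative:

  cat > uring_delete.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_malloc_create", "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
      { "method": "bdev_uring_create", "params": { "name": "uring0", "filename": "/dev/zram1" } },
      { "method": "bdev_uring_delete", "params": { "name": "uring0" } },
      { "method": "bdev_wait_for_examine" } ] } ] }
  EOF
  # any copy that names uring0 against this config fails, since the bdev is deleted during config load
  spdk_dd --ib=uring0 --of=out.bin --json uring_delete.json || echo 'expected failure: No such device'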
00:10:40.211 [2024-07-14 21:09:51.339653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66652 ] 00:10:40.211 [2024-07-14 21:09:51.510111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.211 [2024-07-14 21:09:51.675309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.468 [2024-07-14 21:09:51.837009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:42.937  Copying: 0/0 [B] (average 0 Bps) 00:10:42.937 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:42.937 21:09:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:42.937 { 00:10:42.937 "subsystems": [ 00:10:42.937 { 00:10:42.937 "subsystem": "bdev", 00:10:42.937 "config": [ 00:10:42.937 { 00:10:42.937 "params": { 00:10:42.937 "block_size": 512, 00:10:42.937 "num_blocks": 1048576, 00:10:42.937 "name": "malloc0" 00:10:42.937 }, 00:10:42.937 "method": "bdev_malloc_create" 00:10:42.937 }, 00:10:42.937 { 00:10:42.937 "params": { 00:10:42.937 "filename": "/dev/zram1", 00:10:42.937 "name": "uring0" 00:10:42.937 }, 00:10:42.937 "method": "bdev_uring_create" 00:10:42.937 }, 00:10:42.937 { 00:10:42.937 "params": { 00:10:42.937 "name": "uring0" 00:10:42.937 }, 00:10:42.937 "method": "bdev_uring_delete" 00:10:42.937 }, 
00:10:42.937 { 00:10:42.937 "method": "bdev_wait_for_examine" 00:10:42.937 } 00:10:42.937 ] 00:10:42.937 } 00:10:42.937 ] 00:10:42.937 } 00:10:42.937 [2024-07-14 21:09:54.474744] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:42.937 [2024-07-14 21:09:54.474944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66700 ] 00:10:43.196 [2024-07-14 21:09:54.642983] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.454 [2024-07-14 21:09:54.809454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.454 [2024-07-14 21:09:54.963393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:44.022 [2024-07-14 21:09:55.489594] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:10:44.022 [2024-07-14 21:09:55.489682] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:10:44.022 [2024-07-14 21:09:55.489702] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:10:44.022 [2024-07-14 21:09:55.489719] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:45.920 [2024-07-14 21:09:57.127523] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:46.196 21:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:10:46.196 21:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:46.196 21:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:10:46.196 21:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:10:46.196 21:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:10:46.196 21:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:46.196 21:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:10:46.196 21:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:10:46.196 21:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:10:46.196 21:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:10:46.196 21:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:10:46.196 21:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:10:46.196 00:10:46.196 real 0m30.267s 00:10:46.196 user 0m24.849s 00:10:46.196 sys 0m15.695s 00:10:46.196 21:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.196 21:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:46.196 ************************************ 00:10:46.196 END TEST dd_uring_copy 00:10:46.196 ************************************ 00:10:46.454 21:09:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:10:46.454 00:10:46.454 real 0m30.406s 00:10:46.454 user 0m24.903s 00:10:46.454 sys 0m15.776s 00:10:46.454 21:09:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.454 
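The run above exercises deletion and the failure that follows it: the config streamed at dd/uring.sh@87 appends a bdev_uring_delete entry for uring0, and the read attempted at dd/uring.sh@94 is wrapped in NOT because spdk_dd can no longer open the bdev ("Currently unable to find bdev with name: uring0"). A hedged sketch of that pattern, approximating the NOT helper with a plain exit-status check and using assumed names (delete_config.json, and /dev/null in place of the test's /dev/fd output target); the JSON body is copied from the config printed above.

cat > delete_config.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" }, "method": "bdev_malloc_create" },
  { "params": { "filename": "/dev/zram1", "name": "uring0" }, "method": "bdev_uring_create" },
  { "params": { "name": "uring0" }, "method": "bdev_uring_delete" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
# uring0 is created and deleted again inside the same run, so the copy
# from it is expected to fail with a non-zero exit status.
if ! /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/null --json delete_config.json; then
    echo "read from deleted uring0 failed, as the test expects"
fi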
************************************ 00:10:46.454 END TEST spdk_dd_uring 00:10:46.454 ************************************ 00:10:46.454 21:09:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:10:46.454 21:09:57 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:46.454 21:09:57 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:10:46.454 21:09:57 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:46.454 21:09:57 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.454 21:09:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:46.454 ************************************ 00:10:46.454 START TEST spdk_dd_sparse 00:10:46.454 ************************************ 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:10:46.454 * Looking for test storage... 00:10:46.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:10:46.454 1+0 records in 00:10:46.454 1+0 records out 00:10:46.454 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00495486 s, 847 MB/s 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:10:46.454 1+0 records in 00:10:46.454 1+0 records out 00:10:46.454 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00650089 s, 645 MB/s 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:10:46.454 1+0 records in 00:10:46.454 1+0 records out 00:10:46.454 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00617775 s, 679 MB/s 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:46.454 ************************************ 00:10:46.454 START TEST dd_sparse_file_to_file 00:10:46.454 ************************************ 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' 
['lvs_name']='dd_lvstore') 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:10:46.454 21:09:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:46.712 { 00:10:46.712 "subsystems": [ 00:10:46.712 { 00:10:46.712 "subsystem": "bdev", 00:10:46.712 "config": [ 00:10:46.712 { 00:10:46.712 "params": { 00:10:46.712 "block_size": 4096, 00:10:46.712 "filename": "dd_sparse_aio_disk", 00:10:46.712 "name": "dd_aio" 00:10:46.712 }, 00:10:46.712 "method": "bdev_aio_create" 00:10:46.712 }, 00:10:46.712 { 00:10:46.712 "params": { 00:10:46.712 "lvs_name": "dd_lvstore", 00:10:46.712 "bdev_name": "dd_aio" 00:10:46.712 }, 00:10:46.712 "method": "bdev_lvol_create_lvstore" 00:10:46.712 }, 00:10:46.712 { 00:10:46.712 "method": "bdev_wait_for_examine" 00:10:46.712 } 00:10:46.712 ] 00:10:46.712 } 00:10:46.712 ] 00:10:46.712 } 00:10:46.712 [2024-07-14 21:09:58.043860] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:46.712 [2024-07-14 21:09:58.044212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66814 ] 00:10:46.712 [2024-07-14 21:09:58.197226] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.970 [2024-07-14 21:09:58.365837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.228 [2024-07-14 21:09:58.533719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:48.603  Copying: 12/36 [MB] (average 1090 MBps) 00:10:48.603 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:10:48.603 ************************************ 00:10:48.603 END TEST dd_sparse_file_to_file 00:10:48.603 ************************************ 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:10:48.603 00:10:48.603 real 0m1.838s 
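The dd_sparse_file_to_file run above copies a sparse 36 MiB file through an aio bdev plus lvstore config with --sparse and checks that sparseness survives: the apparent sizes (stat %s) must match while only the three written 4 MiB extents stay allocated (stat %b). A short sketch of the preparation and verification commands, taken from the sparse.sh@18-22 and @47-55 steps shown above.

truncate dd_sparse_aio_disk --size 104857600          # 100 MiB backing file for the dd_aio bdev
dd if=/dev/zero of=file_zero1 bs=4M count=1           # 4 MiB of data at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4    # 4 MiB at offset 16 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8    # 4 MiB at offset 32 MiB, file ends at 36 MiB
# After spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse, the test expects:
stat --printf='%s\n' file_zero1 file_zero2            # both 37748736 bytes apparent size (36 MiB)
stat --printf='%b\n' file_zero1 file_zero2            # both 24576 512-byte blocks (~12 MiB allocated)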
00:10:48.603 user 0m1.529s 00:10:48.603 sys 0m0.895s 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:48.603 ************************************ 00:10:48.603 START TEST dd_sparse_file_to_bdev 00:10:48.603 ************************************ 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:48.603 21:09:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:48.603 { 00:10:48.603 "subsystems": [ 00:10:48.603 { 00:10:48.603 "subsystem": "bdev", 00:10:48.603 "config": [ 00:10:48.603 { 00:10:48.603 "params": { 00:10:48.603 "block_size": 4096, 00:10:48.603 "filename": "dd_sparse_aio_disk", 00:10:48.603 "name": "dd_aio" 00:10:48.603 }, 00:10:48.603 "method": "bdev_aio_create" 00:10:48.603 }, 00:10:48.603 { 00:10:48.603 "params": { 00:10:48.603 "lvs_name": "dd_lvstore", 00:10:48.603 "lvol_name": "dd_lvol", 00:10:48.603 "size_in_mib": 36, 00:10:48.603 "thin_provision": true 00:10:48.603 }, 00:10:48.603 "method": "bdev_lvol_create" 00:10:48.603 }, 00:10:48.603 { 00:10:48.603 "method": "bdev_wait_for_examine" 00:10:48.603 } 00:10:48.603 ] 00:10:48.603 } 00:10:48.603 ] 00:10:48.603 } 00:10:48.603 [2024-07-14 21:09:59.943500] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
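The file_to_bdev step above reverses direction: a thin-provisioned lvol dd_lvstore/dd_lvol of 36 MiB is created on top of the dd_aio bdev and used as the output bdev for spdk_dd --ob. A hedged sketch of that invocation, assuming the dd_lvstore lvstore already exists on dd_sparse_aio_disk from the file_to_file step and using an assumed config file name, sparse_to_bdev.json; the JSON body is copied from the gen_conf output above.

cat > sparse_to_bdev.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "block_size": 4096, "filename": "dd_sparse_aio_disk", "name": "dd_aio" },
    "method": "bdev_aio_create" },
  { "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
                "size_in_mib": 36, "thin_provision": true },
    "method": "bdev_lvol_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol \
    --bs=12582912 --sparse --json sparse_to_bdev.json
# The bdev_to_file step that follows reads the lvol back out with
# --ib=dd_lvstore/dd_lvol --of=file_zero3 and repeats the stat size checks.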
00:10:48.603 [2024-07-14 21:09:59.943690] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66869 ] 00:10:48.603 [2024-07-14 21:10:00.114422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.867 [2024-07-14 21:10:00.273945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.164 [2024-07-14 21:10:00.440613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:50.107  Copying: 12/36 [MB] (average 545 MBps) 00:10:50.107 00:10:50.366 ************************************ 00:10:50.366 END TEST dd_sparse_file_to_bdev 00:10:50.366 ************************************ 00:10:50.366 00:10:50.366 real 0m1.818s 00:10:50.366 user 0m1.549s 00:10:50.366 sys 0m0.876s 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:50.366 ************************************ 00:10:50.366 START TEST dd_sparse_bdev_to_file 00:10:50.366 ************************************ 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:10:50.366 21:10:01 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:50.366 { 00:10:50.366 "subsystems": [ 00:10:50.366 { 00:10:50.366 "subsystem": "bdev", 00:10:50.366 "config": [ 00:10:50.366 { 00:10:50.366 "params": { 00:10:50.366 "block_size": 4096, 00:10:50.366 "filename": "dd_sparse_aio_disk", 00:10:50.366 "name": "dd_aio" 00:10:50.366 }, 00:10:50.366 "method": "bdev_aio_create" 00:10:50.366 }, 00:10:50.366 { 00:10:50.366 "method": "bdev_wait_for_examine" 00:10:50.366 } 00:10:50.366 ] 00:10:50.366 } 00:10:50.366 ] 00:10:50.366 } 00:10:50.366 [2024-07-14 
21:10:01.802395] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:50.366 [2024-07-14 21:10:01.802536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66920 ] 00:10:50.626 [2024-07-14 21:10:01.957419] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.626 [2024-07-14 21:10:02.121824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.885 [2024-07-14 21:10:02.287403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:52.280  Copying: 12/36 [MB] (average 1200 MBps) 00:10:52.280 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:10:52.280 00:10:52.280 real 0m1.817s 00:10:52.280 user 0m1.525s 00:10:52.280 sys 0m0.885s 00:10:52.280 ************************************ 00:10:52.280 END TEST dd_sparse_bdev_to_file 00:10:52.280 ************************************ 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:10:52.280 21:10:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:10:52.280 ************************************ 00:10:52.280 END TEST spdk_dd_sparse 00:10:52.280 ************************************ 00:10:52.280 00:10:52.280 real 0m5.771s 00:10:52.280 user 0m4.692s 00:10:52.281 sys 0m2.847s 00:10:52.281 21:10:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.281 21:10:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:52.281 21:10:03 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:52.281 21:10:03 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative 
/home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:10:52.281 21:10:03 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:52.281 21:10:03 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.281 21:10:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:52.281 ************************************ 00:10:52.281 START TEST spdk_dd_negative 00:10:52.281 ************************************ 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:10:52.281 * Looking for test storage... 00:10:52.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
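The spdk_dd_negative suite that follows runs spdk_dd with deliberately invalid argument combinations through the NOT helper and only passes when each run exits non-zero with the matching error, for example "You must specify either --if or --ib" or "Invalid --bs value". A minimal sketch of that expected-failure pattern, approximating NOT with a plain if ! check and shortening the suite's dd.dump0/dd.dump1 scratch files to relative paths.

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
touch dd.dump0 dd.dump1
if ! "$DD" --ob=; then                                # no --if/--ib given
    echo "rejected as expected: missing input"
fi
if ! "$DD" --if=dd.dump0 --of=dd.dump1 --bs=0; then   # invalid block size
    echo "rejected as expected: invalid --bs value"
fi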
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:52.281 ************************************ 00:10:52.281 START TEST dd_invalid_arguments 00:10:52.281 ************************************ 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:52.281 21:10:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:52.539 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:10:52.539 00:10:52.539 CPU options: 00:10:52.539 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:10:52.539 (like [0,1,10]) 00:10:52.539 --lcores lcore to CPU mapping list. The list is in the format: 00:10:52.539 [<,lcores[@CPUs]>...] 00:10:52.539 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:10:52.539 Within the group, '-' is used for range separator, 00:10:52.539 ',' is used for single number separator. 00:10:52.539 '( )' can be omitted for single element group, 00:10:52.539 '@' can be omitted if cpus and lcores have the same value 00:10:52.539 --disable-cpumask-locks Disable CPU core lock files. 00:10:52.539 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:10:52.539 pollers in the app support interrupt mode) 00:10:52.539 -p, --main-core main (primary) core for DPDK 00:10:52.539 00:10:52.539 Configuration options: 00:10:52.539 -c, --config, --json JSON config file 00:10:52.539 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:10:52.539 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:10:52.539 --wait-for-rpc wait for RPCs to initialize subsystems 00:10:52.539 --rpcs-allowed comma-separated list of permitted RPCS 00:10:52.539 --json-ignore-init-errors don't exit on invalid config entry 00:10:52.539 00:10:52.539 Memory options: 00:10:52.539 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:10:52.539 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:10:52.539 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:10:52.539 -R, --huge-unlink unlink huge files after initialization 00:10:52.539 -n, --mem-channels number of memory channels used for DPDK 00:10:52.539 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:10:52.539 --msg-mempool-size global message memory pool size in count (default: 262143) 00:10:52.539 --no-huge run without using hugepages 00:10:52.539 -i, --shm-id shared memory ID (optional) 00:10:52.539 -g, --single-file-segments force creating just one hugetlbfs file 00:10:52.539 00:10:52.539 PCI options: 00:10:52.539 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:10:52.539 -B, --pci-blocked pci addr to block (can be used more than once) 00:10:52.539 -u, --no-pci disable PCI access 00:10:52.539 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:10:52.539 00:10:52.539 Log options: 00:10:52.539 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:10:52.539 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:10:52.539 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:10:52.539 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:10:52.539 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:10:52.539 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:10:52.539 nvme_auth, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, 00:10:52.539 sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, 00:10:52.539 vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, 00:10:52.539 vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, 00:10:52.539 vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 
virtio_blk, virtio_dev, 00:10:52.539 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:10:52.539 --silence-noticelog disable notice level logging to stderr 00:10:52.539 00:10:52.539 Trace options: 00:10:52.539 --num-trace-entries number of trace entries for each core, must be power of 2, 00:10:52.539 setting 0 to disable trace (default 32768) 00:10:52.539 Tracepoints vary in size and can use more than one trace entry. 00:10:52.539 -e, --tpoint-group [: 128 )) 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:52.798 00:10:52.798 real 0m0.137s 00:10:52.798 user 0m0.073s 00:10:52.798 sys 0m0.062s 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:10:52.798 ************************************ 00:10:52.798 END TEST dd_double_input 00:10:52.798 ************************************ 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:52.798 ************************************ 00:10:52.798 START TEST dd_double_output 00:10:52.798 ************************************ 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:52.798 [2024-07-14 21:10:04.218365] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:52.798 00:10:52.798 real 0m0.131s 00:10:52.798 user 0m0.077s 00:10:52.798 sys 0m0.052s 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.798 ************************************ 00:10:52.798 END TEST dd_double_output 00:10:52.798 ************************************ 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:52.798 ************************************ 00:10:52.798 START TEST dd_no_input 00:10:52.798 ************************************ 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.798 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.799 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.799 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.799 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.799 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.799 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.799 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:52.799 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:53.057 [2024-07-14 21:10:04.410188] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:53.057 00:10:53.057 real 0m0.142s 00:10:53.057 user 0m0.080s 00:10:53.057 sys 0m0.060s 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.057 ************************************ 00:10:53.057 END TEST dd_no_input 00:10:53.057 ************************************ 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:53.057 ************************************ 00:10:53.057 START TEST dd_no_output 00:10:53.057 ************************************ 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:53.057 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:53.317 [2024-07-14 21:10:04.613727] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:53.317 ************************************ 00:10:53.317 END TEST dd_no_output 00:10:53.317 ************************************ 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:53.317 00:10:53.317 real 0m0.161s 00:10:53.317 user 0m0.097s 00:10:53.317 sys 0m0.062s 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:53.317 ************************************ 00:10:53.317 START TEST dd_wrong_blocksize 00:10:53.317 ************************************ 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:53.317 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:53.317 [2024-07-14 21:10:04.822043] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:53.576 ************************************ 00:10:53.576 END TEST dd_wrong_blocksize 00:10:53.576 ************************************ 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:53.576 00:10:53.576 real 0m0.157s 00:10:53.576 user 0m0.088s 00:10:53.576 sys 0m0.067s 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 ************************************ 00:10:53.576 START TEST dd_smaller_blocksize 00:10:53.576 ************************************ 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.576 21:10:04 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:53.576 21:10:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:53.576 [2024-07-14 21:10:05.036021] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:53.576 [2024-07-14 21:10:05.036191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67162 ] 00:10:53.834 [2024-07-14 21:10:05.206263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.834 [2024-07-14 21:10:05.364361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.092 [2024-07-14 21:10:05.530575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:54.351 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:10:54.351 [2024-07-14 21:10:05.888625] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:10:54.351 [2024-07-14 21:10:05.888732] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:55.286 [2024-07-14 21:10:06.489275] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:55.545 00:10:55.545 real 0m1.939s 00:10:55.545 user 0m1.453s 00:10:55.545 sys 0m0.374s 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.545 ************************************ 00:10:55.545 END TEST dd_smaller_blocksize 00:10:55.545 ************************************ 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:10:55.545 21:10:06 
spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:55.545 ************************************ 00:10:55.545 START TEST dd_invalid_count 00:10:55.545 ************************************ 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:55.545 21:10:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:55.545 [2024-07-14 21:10:07.023442] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:10:55.545 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:10:55.545 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:55.545 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:55.545 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:55.545 00:10:55.545 real 0m0.158s 00:10:55.545 user 0m0.098s 00:10:55.545 sys 0m0.059s 00:10:55.545 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.545 ************************************ 00:10:55.545 END TEST dd_invalid_count 00:10:55.545 ************************************ 00:10:55.545 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:55.804 ************************************ 00:10:55.804 START TEST dd_invalid_oflag 00:10:55.804 ************************************ 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:55.804 [2024-07-14 21:10:07.236004] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:55.804 00:10:55.804 real 0m0.160s 00:10:55.804 user 0m0.084s 00:10:55.804 sys 0m0.074s 00:10:55.804 21:10:07 
spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.804 ************************************ 00:10:55.804 END TEST dd_invalid_oflag 00:10:55.804 ************************************ 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:55.804 ************************************ 00:10:55.804 START TEST dd_invalid_iflag 00:10:55.804 ************************************ 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:55.804 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:56.063 [2024-07-14 21:10:07.425655] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:56.063 00:10:56.063 real 0m0.128s 00:10:56.063 user 0m0.070s 
00:10:56.063 sys 0m0.057s 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:56.063 ************************************ 00:10:56.063 END TEST dd_invalid_iflag 00:10:56.063 ************************************ 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:56.063 ************************************ 00:10:56.063 START TEST dd_unknown_flag 00:10:56.063 ************************************ 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:56.063 21:10:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:56.322 [2024-07-14 21:10:07.622241] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:56.322 [2024-07-14 21:10:07.622406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67274 ] 00:10:56.322 [2024-07-14 21:10:07.790531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.580 [2024-07-14 21:10:07.949007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.580 [2024-07-14 21:10:08.108844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:56.839 [2024-07-14 21:10:08.188563] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:10:56.839 [2024-07-14 21:10:08.188649] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:56.839 [2024-07-14 21:10:08.188725] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:10:56.839 [2024-07-14 21:10:08.188745] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:56.839 [2024-07-14 21:10:08.189080] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:10:56.839 [2024-07-14 21:10:08.189104] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:56.839 [2024-07-14 21:10:08.189202] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:56.839 [2024-07-14 21:10:08.189219] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:57.406 [2024-07-14 21:10:08.794879] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:57.665 21:10:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:10:57.665 21:10:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:57.665 21:10:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:10:57.665 ************************************ 00:10:57.665 END TEST dd_unknown_flag 00:10:57.665 ************************************ 00:10:57.665 21:10:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:10:57.665 21:10:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:10:57.665 21:10:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:57.665 00:10:57.665 real 0m1.663s 00:10:57.665 user 0m1.366s 00:10:57.665 sys 0m0.196s 00:10:57.665 21:10:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:57.665 21:10:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:57.923 ************************************ 00:10:57.923 START TEST dd_invalid_json 00:10:57.923 ************************************ 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:10:57.923 21:10:09 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:57.923 21:10:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:57.923 [2024-07-14 21:10:09.344962] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:57.923 [2024-07-14 21:10:09.345131] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67315 ] 00:10:58.182 [2024-07-14 21:10:09.514241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.182 [2024-07-14 21:10:09.678997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.182 [2024-07-14 21:10:09.679099] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:10:58.182 [2024-07-14 21:10:09.679128] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:58.182 [2024-07-14 21:10:09.679142] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:58.182 [2024-07-14 21:10:09.679212] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:58.748 21:10:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:10:58.748 21:10:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:58.748 21:10:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:10:58.748 21:10:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:10:58.748 21:10:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:10:58.748 21:10:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:58.748 00:10:58.748 real 0m0.819s 00:10:58.748 user 0m0.584s 00:10:58.748 sys 0m0.129s 00:10:58.748 21:10:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.748 21:10:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:10:58.748 ************************************ 00:10:58.748 END TEST dd_invalid_json 00:10:58.748 ************************************ 00:10:58.748 21:10:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:58.748 ************************************ 00:10:58.748 END TEST spdk_dd_negative 00:10:58.748 ************************************ 00:10:58.748 00:10:58.748 real 0m6.452s 00:10:58.748 user 0m4.394s 00:10:58.748 sys 0m1.675s 00:10:58.748 21:10:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.748 21:10:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:58.748 21:10:10 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:58.748 00:10:58.748 real 2m52.607s 00:10:58.748 user 2m21.369s 00:10:58.748 sys 0m58.718s 00:10:58.748 ************************************ 00:10:58.748 END TEST spdk_dd 00:10:58.748 ************************************ 00:10:58.748 21:10:10 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.748 21:10:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:58.748 21:10:10 -- common/autotest_common.sh@1142 -- # return 0 00:10:58.748 21:10:10 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:10:58.748 21:10:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:10:58.748 21:10:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:10:58.748 21:10:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:58.748 21:10:10 -- common/autotest_common.sh@10 -- # set +x 00:10:58.748 21:10:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:10:58.748 21:10:10 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:10:58.748 21:10:10 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:10:58.748 21:10:10 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:10:58.748 21:10:10 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:10:58.748 21:10:10 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:10:58.748 21:10:10 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:58.748 21:10:10 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:58.748 21:10:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.748 21:10:10 -- common/autotest_common.sh@10 -- # set +x 00:10:58.748 ************************************ 00:10:58.748 START TEST nvmf_tcp 00:10:58.748 ************************************ 00:10:58.748 21:10:10 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:59.006 * Looking for test storage... 00:10:59.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.006 21:10:10 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.006 21:10:10 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.006 21:10:10 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.006 21:10:10 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.006 21:10:10 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.006 21:10:10 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.006 21:10:10 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:10:59.006 21:10:10 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.006 21:10:10 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.007 21:10:10 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.007 21:10:10 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.007 21:10:10 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.007 21:10:10 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.007 21:10:10 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.007 21:10:10 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:59.007 21:10:10 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:10:59.007 21:10:10 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:10:59.007 21:10:10 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:59.007 21:10:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:59.007 21:10:10 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:10:59.007 21:10:10 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:59.007 21:10:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:59.007 21:10:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.007 21:10:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:59.007 ************************************ 00:10:59.007 START TEST nvmf_host_management 00:10:59.007 ************************************ 00:10:59.007 
21:10:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:59.007 * Looking for test storage... 00:10:59.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
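With NET_TYPE=virt, the nvmftestinit call that starts here does not touch physical NICs; after tearing down any stale namespace it builds a veth-based test network. Condensed from the ip/iptables entries that follow (interface names and addresses exactly as they appear in the log; this is an illustrative sketch, not the literal nvmf_veth_init code, and it omits the individual 'ip link set ... up' calls):

    # Target side lives in its own network namespace; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator <-> bridge
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target    <-> bridge
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target address
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks at the end of the setup (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) simply confirm that both veth pairs forward across the bridge before any NVMe/TCP traffic is attempted.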
00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:59.007 Cannot find device "nvmf_init_br" 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:59.007 Cannot find device "nvmf_tgt_br" 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:59.007 Cannot find device "nvmf_tgt_br2" 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:59.007 Cannot find device "nvmf_init_br" 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:59.007 Cannot find device "nvmf_tgt_br" 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:10:59.007 21:10:10 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:59.007 Cannot find device "nvmf_tgt_br2" 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:59.007 Cannot find device "nvmf_br" 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:59.007 Cannot find device "nvmf_init_if" 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:59.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:59.007 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:59.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
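Once the bridge enslaving and ping checks in the next entries have run, the assembled topology can be inspected with plain iproute2, which is handy when reproducing a failure of this stage locally (generic debugging commands, not part of the test scripts):

    ip -br link show master nvmf_br              # nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2 should be enslaved and UP
    ip -br addr show dev nvmf_init_if            # expect 10.0.0.1/24
    ip netns exec nvmf_tgt_ns_spdk ip -br addr   # expect 10.0.0.2/24 and 10.0.0.3/24
    ip netns exec nvmf_tgt_ns_spdk ss -ltn       # the 4420 listener shows up here once nvmf_tgt is configured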
00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:59.266 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:59.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:59.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:10:59.525 00:10:59.525 --- 10.0.0.2 ping statistics --- 00:10:59.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.525 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:59.525 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:59.525 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:10:59.525 00:10:59.525 --- 10.0.0.3 ping statistics --- 00:10:59.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.525 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:59.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:59.525 00:10:59.525 --- 10.0.0.1 ping statistics --- 00:10:59.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.525 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:59.525 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=67579 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 67579 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 67579 ']' 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:59.525 21:10:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:59.525 [2024-07-14 21:10:10.971159] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:59.526 [2024-07-14 21:10:10.971330] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.785 [2024-07-14 21:10:11.147434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.045 [2024-07-14 21:10:11.380964] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.045 [2024-07-14 21:10:11.381041] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.045 [2024-07-14 21:10:11.381063] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.045 [2024-07-14 21:10:11.381082] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.045 [2024-07-14 21:10:11.381100] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
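The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten, which blocks until the just-launched nvmf_tgt (pid 67579 here) is alive and answering RPCs. In essence it is a poll loop like the following simplified stand-in; the real helper in autotest_common.sh has more bookkeeping (the max_retries=100 seen in the log) and better diagnostics:

    wait_for_rpc() {                                   # simplified stand-in for waitforlisten
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1     # target died while starting up
            [[ -S $rpc_addr ]] &&
                scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                       # never came up
    }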
00:11:00.045 [2024-07-14 21:10:11.381338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.045 [2024-07-14 21:10:11.382081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.045 [2024-07-14 21:10:11.382194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:00.045 [2024-07-14 21:10:11.382197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.045 [2024-07-14 21:10:11.554854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:00.612 [2024-07-14 21:10:11.935939] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.612 21:10:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:00.612 Malloc0 00:11:00.612 [2024-07-14 21:10:12.053498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=67637 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 67637 /var/tmp/bdevperf.sock 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 67637 ']' 
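Besides the nvmf_create_transport call shown explicitly above, the cat | rpc_cmd pair feeds the target a small batch of RPCs that host_management.sh writes to rpcs.txt; the batch itself is not echoed into the log. Given the Malloc0 bdev, the SPDKISFASTANDAWESOME serial, the 10.0.0.2:4420 listener and the host0 NQN that the test later removes and re-adds, it is roughly equivalent to the following rpc.py calls (a plausible reconstruction for orientation, not a verbatim copy of the script):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC bdev_malloc_create 64 512 -b Malloc0         # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE set earlier
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0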
00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:00.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:00.612 { 00:11:00.612 "params": { 00:11:00.612 "name": "Nvme$subsystem", 00:11:00.612 "trtype": "$TEST_TRANSPORT", 00:11:00.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:00.612 "adrfam": "ipv4", 00:11:00.612 "trsvcid": "$NVMF_PORT", 00:11:00.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:00.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:00.612 "hdgst": ${hdgst:-false}, 00:11:00.612 "ddgst": ${ddgst:-false} 00:11:00.612 }, 00:11:00.612 "method": "bdev_nvme_attach_controller" 00:11:00.612 } 00:11:00.612 EOF 00:11:00.612 )") 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:00.612 21:10:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:00.612 "params": { 00:11:00.612 "name": "Nvme0", 00:11:00.612 "trtype": "tcp", 00:11:00.612 "traddr": "10.0.0.2", 00:11:00.612 "adrfam": "ipv4", 00:11:00.612 "trsvcid": "4420", 00:11:00.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:00.612 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:00.612 "hdgst": false, 00:11:00.612 "ddgst": false 00:11:00.612 }, 00:11:00.612 "method": "bdev_nvme_attach_controller" 00:11:00.612 }' 00:11:00.870 [2024-07-14 21:10:12.200854] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
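The printf output just above is only the bdev_nvme_attach_controller fragment produced by gen_nvmf_target_json 0; before bdevperf sees it on /dev/fd/63 it is wrapped (via the jq step) into a full SPDK JSON config. Spelled out as a standalone file it would look roughly like this; the subsystems/config layout shown is the standard SPDK JSON config shape and is given for orientation only, since the exact generated bytes are not in the log:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

Saved to a file, the same workload could be reproduced against an already-configured target with build/examples/bdevperf --json <file> -q 64 -o 65536 -w verify -t 10.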
00:11:00.870 [2024-07-14 21:10:12.201029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67637 ] 00:11:00.870 [2024-07-14 21:10:12.370337] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.129 [2024-07-14 21:10:12.559445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.387 [2024-07-14 21:10:12.748362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:01.387 Running I/O for 10 seconds... 00:11:01.646 21:10:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:01.646 21:10:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:01.646 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:01.646 21:10:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.646 21:10:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:01.905 21:10:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=323 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:01.906 [2024-07-14 21:10:13.258124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.258666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.258797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.258920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c 21:10:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.906 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.259017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:01.906 [2024-07-14 21:10:13.259106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.259192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.906 [2024-07-14 21:10:13.259279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 21:10:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:01.906 [2024-07-14 21:10:13.259348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.259424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.259516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.259616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.259687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.259805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.259966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.260071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 
21:10:13.260264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.260385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.260467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.260545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.260743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.260873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.260959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.261038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.261224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.261348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.261430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.261670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.261783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.261881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.261954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.262037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.262294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.262424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.262512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.262597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.262688] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.262796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.263007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.263115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.263197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.263426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.263536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.263635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.263830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.264032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.264131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.264232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.264316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.264536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.264646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.264731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.264828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.265025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.265123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.265219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.265393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.265503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.265575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.265653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.265867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.906 [2024-07-14 21:10:13.265979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.906 [2024-07-14 21:10:13.266073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.266159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.266409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.266537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.266623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.266703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.266803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 21:10:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.907 [2024-07-14 21:10:13.267019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 21:10:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:01.907 [2024-07-14 21:10:13.267117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.267201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.267280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.267470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.267583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.267680] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.267773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.267971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.268081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.268160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.268335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.268439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.268544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.268724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.268856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.268950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.269131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.269236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.269307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.269389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.269563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.269667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.269739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.269945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.270043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.270142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.270227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.270452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.270550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.270635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.270705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.270908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.271007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.271094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.271319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.271425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.271507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.271614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.271696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.271897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.272012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.272097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.272269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.272381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.272468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.272546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.272781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.272914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.273004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.273225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.273337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.273426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.273506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.273573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.273829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.273944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.274129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.907 [2024-07-14 21:10:13.274243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.274313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(5) to be set 00:11:01.907 [2024-07-14 21:10:13.274859] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 
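The wall of ABORTED - SQ DELETION completions above is the fault this test injects on purpose: while bdevperf still has its 64 verify I/Os queued against Nvme0n1, host_management.sh revokes the host's access to the subsystem, so the target deletes the submission queue and every outstanding command comes back aborted. A minimal sketch of that injection step, using only the rpc.py verbs and NQNs visible in this trace (the target-side calls are assumed to go to the default /var/tmp/spdk.sock):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Revoke the host: in-flight I/O completes with ABORTED - SQ DELETION and the qpair is freed.
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-admit the host so bdevperf's subsequent controller reset can reconnect.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0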
00:11:01.907 [2024-07-14 21:10:13.275226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.907 [2024-07-14 21:10:13.275437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.275570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.907 [2024-07-14 21:10:13.275660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.275726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.907 [2024-07-14 21:10:13.275911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.276020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.907 [2024-07-14 21:10:13.276105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.907 [2024-07-14 21:10:13.276169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:11:01.907 task offset: 57216 on job bdev=Nvme0n1 fails 00:11:01.907 00:11:01.907 Latency(us) 00:11:01.907 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.907 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:01.907 Job: Nvme0n1 ended in about 0.34 seconds with error 00:11:01.907 Verification LBA range: start 0x0 length 0x400 00:11:01.907 Nvme0n1 : 0.34 1119.61 69.98 186.60 0.00 47127.94 13702.98 42181.35 00:11:01.907 =================================================================================================================== 00:11:01.907 Total : 1119.61 69.98 186.60 0.00 47127.94 13702.98 42181.35 00:11:01.908 [2024-07-14 21:10:13.277686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:01.908 [2024-07-14 21:10:13.282707] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:01.908 [2024-07-14 21:10:13.282891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:11:01.908 [2024-07-14 21:10:13.297382] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
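The summary above is therefore expected for this negative test: the job averaged about 1120 read IOPS before the host was revoked, the aborted queue depth shows up in the Fail/s column, and the run ends with a successful controller reset once the host has been re-added. The only precondition the script checked beforehand is that bdevperf was really moving data (the read_io_count=323 against the -ge 100 threshold earlier in this run). A sketch of that progress check, with the RPC socket, bdev name and jq filter copied from the trace (the variable name is illustrative):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Query the bdevperf app (not the target) for completed reads on Nvme0n1.
reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
# Only inject the host-removal fault once real I/O has been observed.
[[ $reads -ge 100 ]] && echo "bdevperf is making progress ($reads reads so far)"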
00:11:02.844 21:10:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 67637 00:11:02.844 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (67637) - No such process 00:11:02.844 21:10:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:02.844 21:10:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:02.844 21:10:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:02.844 21:10:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:02.844 21:10:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:02.844 21:10:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:02.844 21:10:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:02.844 21:10:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:02.844 { 00:11:02.844 "params": { 00:11:02.844 "name": "Nvme$subsystem", 00:11:02.844 "trtype": "$TEST_TRANSPORT", 00:11:02.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:02.844 "adrfam": "ipv4", 00:11:02.844 "trsvcid": "$NVMF_PORT", 00:11:02.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:02.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:02.844 "hdgst": ${hdgst:-false}, 00:11:02.844 "ddgst": ${ddgst:-false} 00:11:02.844 }, 00:11:02.844 "method": "bdev_nvme_attach_controller" 00:11:02.844 } 00:11:02.844 EOF 00:11:02.844 )") 00:11:02.844 21:10:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:02.844 21:10:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:02.844 21:10:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:02.844 21:10:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:02.844 "params": { 00:11:02.844 "name": "Nvme0", 00:11:02.844 "trtype": "tcp", 00:11:02.844 "traddr": "10.0.0.2", 00:11:02.844 "adrfam": "ipv4", 00:11:02.844 "trsvcid": "4420", 00:11:02.844 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:02.844 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:02.844 "hdgst": false, 00:11:02.844 "ddgst": false 00:11:02.844 }, 00:11:02.844 "method": "bdev_nvme_attach_controller" 00:11:02.844 }' 00:11:02.844 [2024-07-14 21:10:14.375485] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:02.844 [2024-07-14 21:10:14.375681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67676 ] 00:11:03.103 [2024-07-14 21:10:14.547326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.362 [2024-07-14 21:10:14.730108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.642 [2024-07-14 21:10:14.914713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:03.642 Running I/O for 1 seconds... 
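The second bdevperf invocation above is the clean half of the test: the same controller attach, but a one-second verify pass (-t 1) with no fault injected. The /dev/fd/62 argument is just bash process substitution around gen_nvmf_target_json, whose expanded bdev_nvme_attach_controller parameters are printed in full in the trace. A rough equivalent, assuming test/nvmf/common.sh has been sourced so that helper and its NVMF_* variables exist:
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
# 64-deep, 64 KiB verify workload for one second against the JSON-attached Nvme0 controller.
$bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1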
00:11:05.019 00:11:05.019 Latency(us) 00:11:05.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:05.019 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:05.019 Verification LBA range: start 0x0 length 0x400 00:11:05.019 Nvme0n1 : 1.05 1408.51 88.03 0.00 0.00 44609.30 5689.72 40751.48 00:11:05.019 =================================================================================================================== 00:11:05.019 Total : 1408.51 88.03 0.00 0.00 44609.30 5689.72 40751.48 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:05.958 rmmod nvme_tcp 00:11:05.958 rmmod nvme_fabrics 00:11:05.958 rmmod nvme_keyring 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 67579 ']' 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 67579 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 67579 ']' 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 67579 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67579 00:11:05.958 killing process with pid 67579 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67579' 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 67579 00:11:05.958 21:10:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 67579 00:11:07.337 [2024-07-14 21:10:18.486183] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:11:07.337 21:10:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:07.337 21:10:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:07.337 21:10:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:07.337 21:10:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:07.337 21:10:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:07.337 21:10:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.337 21:10:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.337 21:10:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.337 21:10:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:07.338 21:10:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:07.338 ************************************ 00:11:07.338 END TEST nvmf_host_management 00:11:07.338 ************************************ 00:11:07.338 00:11:07.338 real 0m8.264s 00:11:07.338 user 0m32.298s 00:11:07.338 sys 0m1.589s 00:11:07.338 21:10:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.338 21:10:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:07.338 21:10:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:07.338 21:10:18 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:07.338 21:10:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:07.338 21:10:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.338 21:10:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:07.338 ************************************ 00:11:07.338 START TEST nvmf_lvol 00:11:07.338 ************************************ 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:07.338 * Looking for test storage... 
00:11:07.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:07.338 21:10:18 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:07.338 Cannot find device "nvmf_tgt_br" 00:11:07.338 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:11:07.339 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:07.339 Cannot find device "nvmf_tgt_br2" 00:11:07.339 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:11:07.339 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:07.339 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:07.339 Cannot find device "nvmf_tgt_br" 00:11:07.339 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:11:07.339 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:07.339 Cannot find device "nvmf_tgt_br2" 00:11:07.339 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:11:07.339 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:07.339 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:07.339 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:07.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:07.339 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:11:07.339 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:07.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:07.339 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:11:07.339 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:07.339 21:10:18 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:07.598 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:07.598 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:07.598 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:07.598 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:07.598 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:07.598 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:07.598 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:07.598 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:07.598 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:07.598 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:07.598 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:07.598 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:07.598 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:07.598 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:07.598 21:10:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:07.598 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:07.598 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:07.598 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:07.598 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:07.598 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:07.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:11:07.599 00:11:07.599 --- 10.0.0.2 ping statistics --- 00:11:07.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.599 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:07.599 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:07.599 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:11:07.599 00:11:07.599 --- 10.0.0.3 ping statistics --- 00:11:07.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.599 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:07.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:07.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:07.599 00:11:07.599 --- 10.0.0.1 ping statistics --- 00:11:07.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.599 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=67919 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 67919 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 67919 ']' 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:07.599 21:10:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:07.858 [2024-07-14 21:10:19.209736] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:07.858 [2024-07-14 21:10:19.209962] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.858 [2024-07-14 21:10:19.384674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:08.117 [2024-07-14 21:10:19.550677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.117 [2024-07-14 21:10:19.550757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:08.117 [2024-07-14 21:10:19.550803] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.117 [2024-07-14 21:10:19.550818] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.117 [2024-07-14 21:10:19.550829] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.117 [2024-07-14 21:10:19.551024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.117 [2024-07-14 21:10:19.551344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.117 [2024-07-14 21:10:19.551351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.376 [2024-07-14 21:10:19.721402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:08.635 21:10:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.635 21:10:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:08.635 21:10:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:08.635 21:10:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:08.635 21:10:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:08.635 21:10:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.635 21:10:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:08.893 [2024-07-14 21:10:20.432080] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.152 21:10:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:09.412 21:10:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:09.412 21:10:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:09.671 21:10:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:09.671 21:10:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:09.930 21:10:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:10.189 21:10:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7fcefbb1-8a61-4ffe-83e2-987df371366e 00:11:10.189 21:10:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7fcefbb1-8a61-4ffe-83e2-987df371366e lvol 20 00:11:10.448 21:10:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=06f4c3e6-0d75-4740-8c46-6aa3c51ad445 00:11:10.448 21:10:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:10.707 21:10:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 06f4c3e6-0d75-4740-8c46-6aa3c51ad445 00:11:10.968 21:10:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:11.234 [2024-07-14 21:10:22.607121] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.234 21:10:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:11.502 21:10:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=67995 00:11:11.502 21:10:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:11.502 21:10:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:12.441 21:10:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 06f4c3e6-0d75-4740-8c46-6aa3c51ad445 MY_SNAPSHOT 00:11:12.699 21:10:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=87e775a8-08dc-45fe-a837-4ae8ff68d25e 00:11:12.699 21:10:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 06f4c3e6-0d75-4740-8c46-6aa3c51ad445 30 00:11:12.957 21:10:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 87e775a8-08dc-45fe-a837-4ae8ff68d25e MY_CLONE 00:11:13.214 21:10:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9d1f4f1b-ccc1-4f61-a6e5-60be7cd6a0ec 00:11:13.215 21:10:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 9d1f4f1b-ccc1-4f61-a6e5-60be7cd6a0ec 00:11:13.780 21:10:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 67995 00:11:21.888 Initializing NVMe Controllers 00:11:21.888 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:21.888 Controller IO queue size 128, less than required. 00:11:21.888 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:21.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:21.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:21.888 Initialization complete. Launching workers. 
00:11:21.888 ======================================================== 00:11:21.888 Latency(us) 00:11:21.888 Device Information : IOPS MiB/s Average min max 00:11:21.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9197.60 35.93 13924.31 247.46 159143.15 00:11:21.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8894.80 34.75 14393.70 6033.17 167751.20 00:11:21.888 ======================================================== 00:11:21.888 Total : 18092.40 70.67 14155.08 247.46 167751.20 00:11:21.888 00:11:21.888 21:10:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:21.888 21:10:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 06f4c3e6-0d75-4740-8c46-6aa3c51ad445 00:11:22.452 21:10:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7fcefbb1-8a61-4ffe-83e2-987df371366e 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:22.710 rmmod nvme_tcp 00:11:22.710 rmmod nvme_fabrics 00:11:22.710 rmmod nvme_keyring 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 67919 ']' 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 67919 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 67919 ']' 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 67919 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67919 00:11:22.710 killing process with pid 67919 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67919' 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 67919 00:11:22.710 21:10:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 67919 00:11:24.081 21:10:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:24.081 21:10:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
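For reference, the RPC sequence that target/nvmf_lvol.sh exercised above condenses to the sketch below. This is a hedged recap, not part of the test output: rpc.py stands for scripts/rpc.py in the SPDK tree, a running nvmf_tgt on the default /var/tmp/spdk.sock is assumed, and the bdev names, NQN and 10.0.0.2:4420 listener are simply the values seen in this run.

# Export a raid0-backed logical volume over NVMe/TCP, then snapshot/clone it under load
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                                   # Malloc0
rpc.py bdev_malloc_create 64 512                                   # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                  # lvol of size 20, as above
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# spdk_nvme_perf writes to the namespace while the volume is manipulated underneath it
snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
rpc.py bdev_lvol_resize "$lvol" 30
clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
rpc.py bdev_lvol_inflate "$clone"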
00:11:24.081 21:10:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:24.081 21:10:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:24.081 21:10:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:24.081 21:10:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.081 21:10:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:24.081 21:10:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.081 21:10:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:24.081 ************************************ 00:11:24.081 END TEST nvmf_lvol 00:11:24.081 ************************************ 00:11:24.081 00:11:24.081 real 0m16.960s 00:11:24.081 user 1m8.215s 00:11:24.081 sys 0m3.868s 00:11:24.081 21:10:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:24.081 21:10:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:24.339 21:10:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:24.339 21:10:35 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:24.339 21:10:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:24.339 21:10:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.339 21:10:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:24.339 ************************************ 00:11:24.339 START TEST nvmf_lvs_grow 00:11:24.339 ************************************ 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:24.339 * Looking for test storage... 
00:11:24.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:24.339 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:24.340 Cannot find device "nvmf_tgt_br" 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:24.340 Cannot find device "nvmf_tgt_br2" 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:24.340 Cannot find device "nvmf_tgt_br" 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:24.340 Cannot find device "nvmf_tgt_br2" 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:24.340 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:24.597 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:24.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:24.597 21:10:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:24.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:24.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:11:24.597 00:11:24.597 --- 10.0.0.2 ping statistics --- 00:11:24.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.597 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:24.597 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:24.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:11:24.597 00:11:24.597 --- 10.0.0.3 ping statistics --- 00:11:24.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.597 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:24.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:24.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:24.597 00:11:24.597 --- 10.0.0.1 ping statistics --- 00:11:24.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.597 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=68332 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 68332 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 68332 ']' 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:24.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
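The nvmf_veth_init block above (netns, veth pairs, bridge, iptables rules, ping checks) amounts to the topology sketched below. Interface and namespace names are the ones from nvmf/common.sh used in this run; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is configured the same way and is left out here only to keep the recap short.

# Target side lives in its own network namespace, reachable from the host over a bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                                 # host -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                  # target -> host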
00:11:24.597 21:10:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:24.598 21:10:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:24.855 [2024-07-14 21:10:36.239950] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:24.855 [2024-07-14 21:10:36.240125] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.112 [2024-07-14 21:10:36.418962] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.112 [2024-07-14 21:10:36.655160] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.112 [2024-07-14 21:10:36.655228] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.112 [2024-07-14 21:10:36.655249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.112 [2024-07-14 21:10:36.655266] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.112 [2024-07-14 21:10:36.655280] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.112 [2024-07-14 21:10:36.655326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.370 [2024-07-14 21:10:36.854801] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:25.936 21:10:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:25.936 21:10:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:11:25.936 21:10:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:25.936 21:10:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:25.936 21:10:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:25.936 21:10:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.936 21:10:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:26.194 [2024-07-14 21:10:37.504197] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.194 21:10:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:26.194 21:10:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:26.194 21:10:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.194 21:10:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:26.194 ************************************ 00:11:26.194 START TEST lvs_grow_clean 00:11:26.194 ************************************ 00:11:26.194 21:10:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:11:26.194 21:10:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:26.194 21:10:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:26.194 21:10:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:26.194 21:10:37 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:26.194 21:10:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:26.194 21:10:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:26.194 21:10:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:26.194 21:10:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:26.195 21:10:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:26.453 21:10:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:26.453 21:10:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:26.711 21:10:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0089db4b-a6b2-4e8c-a2ec-02b44c0ae6de 00:11:26.711 21:10:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0089db4b-a6b2-4e8c-a2ec-02b44c0ae6de 00:11:26.711 21:10:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:26.969 21:10:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:26.969 21:10:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:26.969 21:10:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0089db4b-a6b2-4e8c-a2ec-02b44c0ae6de lvol 150 00:11:27.228 21:10:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=650d645a-973c-4454-bd6a-7ae08c225a19 00:11:27.228 21:10:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:27.228 21:10:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:27.486 [2024-07-14 21:10:38.867027] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:27.486 [2024-07-14 21:10:38.867144] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:27.486 true 00:11:27.486 21:10:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0089db4b-a6b2-4e8c-a2ec-02b44c0ae6de 00:11:27.486 21:10:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:27.744 21:10:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:27.744 21:10:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:28.002 21:10:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 650d645a-973c-4454-bd6a-7ae08c225a19 00:11:28.260 21:10:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:28.519 [2024-07-14 21:10:39.815832] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.519 21:10:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:28.519 21:10:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:28.519 21:10:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68414 00:11:28.519 21:10:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:28.519 21:10:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68414 /var/tmp/bdevperf.sock 00:11:28.519 21:10:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 68414 ']' 00:11:28.519 21:10:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:28.519 21:10:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:28.519 21:10:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:28.519 21:10:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.519 21:10:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:28.777 [2024-07-14 21:10:40.137955] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:28.777 [2024-07-14 21:10:40.138110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68414 ] 00:11:28.777 [2024-07-14 21:10:40.303601] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.036 [2024-07-14 21:10:40.522412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.295 [2024-07-14 21:10:40.696851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:29.554 21:10:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.554 21:10:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:29.554 21:10:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:29.813 Nvme0n1 00:11:29.813 21:10:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:30.072 [ 00:11:30.072 { 00:11:30.072 "name": "Nvme0n1", 00:11:30.072 "aliases": [ 00:11:30.072 "650d645a-973c-4454-bd6a-7ae08c225a19" 00:11:30.072 ], 00:11:30.072 "product_name": "NVMe disk", 00:11:30.072 "block_size": 4096, 00:11:30.072 "num_blocks": 38912, 00:11:30.072 "uuid": "650d645a-973c-4454-bd6a-7ae08c225a19", 00:11:30.072 "assigned_rate_limits": { 00:11:30.072 "rw_ios_per_sec": 0, 00:11:30.072 "rw_mbytes_per_sec": 0, 00:11:30.072 "r_mbytes_per_sec": 0, 00:11:30.072 "w_mbytes_per_sec": 0 00:11:30.072 }, 00:11:30.072 "claimed": false, 00:11:30.072 "zoned": false, 00:11:30.072 "supported_io_types": { 00:11:30.072 "read": true, 00:11:30.072 "write": true, 00:11:30.072 "unmap": true, 00:11:30.072 "flush": true, 00:11:30.072 "reset": true, 00:11:30.072 "nvme_admin": true, 00:11:30.072 "nvme_io": true, 00:11:30.072 "nvme_io_md": false, 00:11:30.072 "write_zeroes": true, 00:11:30.072 "zcopy": false, 00:11:30.072 "get_zone_info": false, 00:11:30.072 "zone_management": false, 00:11:30.072 "zone_append": false, 00:11:30.072 "compare": true, 00:11:30.072 "compare_and_write": true, 00:11:30.072 "abort": true, 00:11:30.072 "seek_hole": false, 00:11:30.072 "seek_data": false, 00:11:30.072 "copy": true, 00:11:30.072 "nvme_iov_md": false 00:11:30.072 }, 00:11:30.072 "memory_domains": [ 00:11:30.072 { 00:11:30.072 "dma_device_id": "system", 00:11:30.073 "dma_device_type": 1 00:11:30.073 } 00:11:30.073 ], 00:11:30.073 "driver_specific": { 00:11:30.073 "nvme": [ 00:11:30.073 { 00:11:30.073 "trid": { 00:11:30.073 "trtype": "TCP", 00:11:30.073 "adrfam": "IPv4", 00:11:30.073 "traddr": "10.0.0.2", 00:11:30.073 "trsvcid": "4420", 00:11:30.073 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:30.073 }, 00:11:30.073 "ctrlr_data": { 00:11:30.073 "cntlid": 1, 00:11:30.073 "vendor_id": "0x8086", 00:11:30.073 "model_number": "SPDK bdev Controller", 00:11:30.073 "serial_number": "SPDK0", 00:11:30.073 "firmware_revision": "24.09", 00:11:30.073 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:30.073 "oacs": { 00:11:30.073 "security": 0, 00:11:30.073 "format": 0, 00:11:30.073 "firmware": 0, 00:11:30.073 "ns_manage": 0 00:11:30.073 }, 00:11:30.073 "multi_ctrlr": true, 00:11:30.073 
"ana_reporting": false 00:11:30.073 }, 00:11:30.073 "vs": { 00:11:30.073 "nvme_version": "1.3" 00:11:30.073 }, 00:11:30.073 "ns_data": { 00:11:30.073 "id": 1, 00:11:30.073 "can_share": true 00:11:30.073 } 00:11:30.073 } 00:11:30.073 ], 00:11:30.073 "mp_policy": "active_passive" 00:11:30.073 } 00:11:30.073 } 00:11:30.073 ] 00:11:30.073 21:10:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:30.073 21:10:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68438 00:11:30.073 21:10:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:30.332 Running I/O for 10 seconds... 00:11:31.268 Latency(us) 00:11:31.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.268 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:31.268 Nvme0n1 : 1.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:11:31.268 =================================================================================================================== 00:11:31.268 Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:11:31.268 00:11:32.204 21:10:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0089db4b-a6b2-4e8c-a2ec-02b44c0ae6de 00:11:32.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:32.204 Nvme0n1 : 2.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:11:32.204 =================================================================================================================== 00:11:32.204 Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:11:32.204 00:11:32.463 true 00:11:32.463 21:10:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0089db4b-a6b2-4e8c-a2ec-02b44c0ae6de 00:11:32.463 21:10:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:32.721 21:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:32.721 21:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:32.721 21:10:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 68438 00:11:33.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.286 Nvme0n1 : 3.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:11:33.286 =================================================================================================================== 00:11:33.286 Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:11:33.286 00:11:34.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:34.288 Nvme0n1 : 4.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:11:34.288 =================================================================================================================== 00:11:34.288 Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:11:34.288 00:11:35.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.224 Nvme0n1 : 5.00 5918.20 23.12 0.00 0.00 0.00 0.00 0.00 00:11:35.224 =================================================================================================================== 00:11:35.224 Total : 5918.20 23.12 0.00 0.00 0.00 
0.00 0.00 00:11:35.224 00:11:36.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.159 Nvme0n1 : 6.00 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:11:36.159 =================================================================================================================== 00:11:36.159 Total : 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:11:36.159 00:11:37.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.536 Nvme0n1 : 7.00 5896.43 23.03 0.00 0.00 0.00 0.00 0.00 00:11:37.536 =================================================================================================================== 00:11:37.536 Total : 5896.43 23.03 0.00 0.00 0.00 0.00 0.00 00:11:37.536 00:11:38.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.473 Nvme0n1 : 8.00 5937.25 23.19 0.00 0.00 0.00 0.00 0.00 00:11:38.473 =================================================================================================================== 00:11:38.473 Total : 5937.25 23.19 0.00 0.00 0.00 0.00 0.00 00:11:38.473 00:11:39.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.410 Nvme0n1 : 9.00 5884.33 22.99 0.00 0.00 0.00 0.00 0.00 00:11:39.410 =================================================================================================================== 00:11:39.410 Total : 5884.33 22.99 0.00 0.00 0.00 0.00 0.00 00:11:39.410 00:11:40.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.348 Nvme0n1 : 10.00 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:11:40.348 =================================================================================================================== 00:11:40.348 Total : 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:11:40.348 00:11:40.348 00:11:40.348 Latency(us) 00:11:40.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.348 Nvme0n1 : 10.01 5847.64 22.84 0.00 0.00 21881.32 19065.02 78643.20 00:11:40.348 =================================================================================================================== 00:11:40.348 Total : 5847.64 22.84 0.00 0.00 21881.32 19065.02 78643.20 00:11:40.348 0 00:11:40.348 21:10:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68414 00:11:40.348 21:10:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 68414 ']' 00:11:40.348 21:10:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 68414 00:11:40.348 21:10:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:11:40.348 21:10:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:40.348 21:10:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68414 00:11:40.348 21:10:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:40.348 21:10:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:40.348 killing process with pid 68414 00:11:40.348 21:10:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68414' 00:11:40.348 21:10:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 68414 
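The grow sequence verified above (total_data_clusters going from 49 to 99 while bdevperf keeps writing to Nvme0n1) reduces to the sketch below. Again a hedged recap rather than test output: the aio file path, 4 MiB cluster size and MiB sizes are the ones nvmf_lvs_grow.sh used in this run, rpc.py is scripts/rpc.py as before, and jq (already used by the test) is assumed for picking fields out of the RPC output.

AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$AIO_FILE"
rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
rpc.py bdev_lvol_create -u "$lvs" lvol 150                         # 150 MiB lvol, 38 of the 49 clusters
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49

# Grow the backing file, rescan the aio bdev, then grow the lvstore on top of it
truncate -s 400M "$AIO_FILE"
rpc.py bdev_aio_rescan aio_bdev
rpc.py bdev_lvol_grow_lvstore -u "$lvs"
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99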
00:11:40.348 Received shutdown signal, test time was about 10.000000 seconds 00:11:40.348 00:11:40.348 Latency(us) 00:11:40.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.348 =================================================================================================================== 00:11:40.348 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:40.348 21:10:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 68414 00:11:41.281 21:10:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:41.537 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:41.796 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0089db4b-a6b2-4e8c-a2ec-02b44c0ae6de 00:11:41.796 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:42.054 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:42.054 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:42.054 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:42.313 [2024-07-14 21:10:53.798695] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:42.313 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0089db4b-a6b2-4e8c-a2ec-02b44c0ae6de 00:11:42.313 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:42.313 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0089db4b-a6b2-4e8c-a2ec-02b44c0ae6de 00:11:42.313 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.313 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.313 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.313 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.313 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.313 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.313 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.313 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:42.313 21:10:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 0089db4b-a6b2-4e8c-a2ec-02b44c0ae6de 00:11:42.571 request: 00:11:42.571 { 00:11:42.571 "uuid": "0089db4b-a6b2-4e8c-a2ec-02b44c0ae6de", 00:11:42.571 "method": "bdev_lvol_get_lvstores", 00:11:42.571 "req_id": 1 00:11:42.571 } 00:11:42.571 Got JSON-RPC error response 00:11:42.571 response: 00:11:42.571 { 00:11:42.571 "code": -19, 00:11:42.571 "message": "No such device" 00:11:42.571 } 00:11:42.830 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:42.830 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:42.830 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:42.830 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:42.830 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:42.830 aio_bdev 00:11:42.830 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 650d645a-973c-4454-bd6a-7ae08c225a19 00:11:42.830 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=650d645a-973c-4454-bd6a-7ae08c225a19 00:11:42.830 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:42.830 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:11:42.830 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:42.830 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:42.830 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:43.088 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 650d645a-973c-4454-bd6a-7ae08c225a19 -t 2000 00:11:43.347 [ 00:11:43.347 { 00:11:43.347 "name": "650d645a-973c-4454-bd6a-7ae08c225a19", 00:11:43.347 "aliases": [ 00:11:43.347 "lvs/lvol" 00:11:43.347 ], 00:11:43.347 "product_name": "Logical Volume", 00:11:43.347 "block_size": 4096, 00:11:43.347 "num_blocks": 38912, 00:11:43.347 "uuid": "650d645a-973c-4454-bd6a-7ae08c225a19", 00:11:43.347 "assigned_rate_limits": { 00:11:43.347 "rw_ios_per_sec": 0, 00:11:43.347 "rw_mbytes_per_sec": 0, 00:11:43.347 "r_mbytes_per_sec": 0, 00:11:43.347 "w_mbytes_per_sec": 0 00:11:43.347 }, 00:11:43.347 "claimed": false, 00:11:43.347 "zoned": false, 00:11:43.347 "supported_io_types": { 00:11:43.347 "read": true, 00:11:43.347 "write": true, 00:11:43.347 "unmap": true, 00:11:43.347 "flush": false, 00:11:43.347 "reset": true, 00:11:43.347 "nvme_admin": false, 00:11:43.347 "nvme_io": false, 00:11:43.347 "nvme_io_md": false, 00:11:43.348 "write_zeroes": true, 00:11:43.348 "zcopy": false, 00:11:43.348 "get_zone_info": false, 00:11:43.348 "zone_management": false, 00:11:43.348 "zone_append": false, 00:11:43.348 "compare": false, 00:11:43.348 "compare_and_write": false, 00:11:43.348 "abort": false, 00:11:43.348 "seek_hole": true, 00:11:43.348 "seek_data": true, 00:11:43.348 "copy": false, 00:11:43.348 "nvme_iov_md": false 00:11:43.348 }, 00:11:43.348 "driver_specific": { 00:11:43.348 "lvol": { 
00:11:43.348 "lvol_store_uuid": "0089db4b-a6b2-4e8c-a2ec-02b44c0ae6de", 00:11:43.348 "base_bdev": "aio_bdev", 00:11:43.348 "thin_provision": false, 00:11:43.348 "num_allocated_clusters": 38, 00:11:43.348 "snapshot": false, 00:11:43.348 "clone": false, 00:11:43.348 "esnap_clone": false 00:11:43.348 } 00:11:43.348 } 00:11:43.348 } 00:11:43.348 ] 00:11:43.348 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:11:43.348 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0089db4b-a6b2-4e8c-a2ec-02b44c0ae6de 00:11:43.348 21:10:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:43.606 21:10:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:43.606 21:10:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0089db4b-a6b2-4e8c-a2ec-02b44c0ae6de 00:11:43.606 21:10:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:43.864 21:10:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:43.864 21:10:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 650d645a-973c-4454-bd6a-7ae08c225a19 00:11:44.122 21:10:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0089db4b-a6b2-4e8c-a2ec-02b44c0ae6de 00:11:44.381 21:10:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:44.639 21:10:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:44.898 ************************************ 00:11:44.898 END TEST lvs_grow_clean 00:11:44.898 ************************************ 00:11:44.898 00:11:44.898 real 0m18.851s 00:11:44.898 user 0m17.822s 00:11:44.898 sys 0m2.309s 00:11:44.898 21:10:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:44.898 21:10:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:44.898 21:10:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:44.898 21:10:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:44.898 21:10:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:44.898 21:10:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.898 21:10:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:44.898 ************************************ 00:11:44.898 START TEST lvs_grow_dirty 00:11:44.898 ************************************ 00:11:44.898 21:10:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:11:44.898 21:10:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:44.898 21:10:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:44.898 21:10:56 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:44.898 21:10:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:44.898 21:10:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:44.898 21:10:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:44.898 21:10:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:44.898 21:10:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:45.156 21:10:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:45.414 21:10:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:45.414 21:10:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:45.672 21:10:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=caaad993-3009-41ae-9885-aaf6ff2ac77c 00:11:45.672 21:10:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caaad993-3009-41ae-9885-aaf6ff2ac77c 00:11:45.672 21:10:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:45.930 21:10:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:45.930 21:10:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:45.930 21:10:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u caaad993-3009-41ae-9885-aaf6ff2ac77c lvol 150 00:11:46.188 21:10:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5cfd192f-34ac-430f-ba78-916245b22cb3 00:11:46.188 21:10:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:46.188 21:10:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:46.446 [2024-07-14 21:10:57.757003] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:46.446 [2024-07-14 21:10:57.757124] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:46.446 true 00:11:46.446 21:10:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caaad993-3009-41ae-9885-aaf6ff2ac77c 00:11:46.446 21:10:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:46.705 21:10:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:11:46.705 21:10:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:46.964 21:10:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5cfd192f-34ac-430f-ba78-916245b22cb3 00:11:47.223 21:10:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:47.481 [2024-07-14 21:10:58.801766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.481 21:10:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:47.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:47.739 21:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68699 00:11:47.739 21:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:47.739 21:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:47.739 21:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68699 /var/tmp/bdevperf.sock 00:11:47.739 21:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 68699 ']' 00:11:47.739 21:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:47.739 21:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.739 21:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:47.739 21:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.739 21:10:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:47.739 [2024-07-14 21:10:59.127823] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
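The dirty variant begins the same way as the clean one above: a file-backed AIO bdev, an lvstore on top of it, and one lvol that is exported over NVMe/TCP. A condensed sketch of that setup, using only RPC calls that appear in this trace; the shell variable names and the captured UUIDs are illustrative assumptions, the paths match this run.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$AIO_FILE"                               # 200 MiB backing file
  "$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096           # expose it as an AIO bdev with 4 KiB blocks
  lvs=$("$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)       # prints the new lvstore UUID
  lvol=$("$RPC" bdev_lvol_create -u "$lvs" lvol 150)         # 150 MiB logical volume, prints its UUID
  truncate -s 400M "$AIO_FILE"                               # grow the backing file
  "$RPC" bdev_aio_rescan aio_bdev                            # bdev picks up 51200 -> 102400 blocks
  "$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49 here
  "$RPC" bdev_lvol_grow_lvstore -u "$lvs"                    # later in the run this raises it to 99
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420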
00:11:47.739 [2024-07-14 21:10:59.128264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68699 ] 00:11:47.739 [2024-07-14 21:10:59.285035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.997 [2024-07-14 21:10:59.438571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.255 [2024-07-14 21:10:59.604233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:48.512 21:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.512 21:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:48.513 21:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:48.771 Nvme0n1 00:11:48.771 21:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:49.030 [ 00:11:49.030 { 00:11:49.030 "name": "Nvme0n1", 00:11:49.030 "aliases": [ 00:11:49.030 "5cfd192f-34ac-430f-ba78-916245b22cb3" 00:11:49.030 ], 00:11:49.030 "product_name": "NVMe disk", 00:11:49.030 "block_size": 4096, 00:11:49.030 "num_blocks": 38912, 00:11:49.030 "uuid": "5cfd192f-34ac-430f-ba78-916245b22cb3", 00:11:49.030 "assigned_rate_limits": { 00:11:49.030 "rw_ios_per_sec": 0, 00:11:49.030 "rw_mbytes_per_sec": 0, 00:11:49.030 "r_mbytes_per_sec": 0, 00:11:49.030 "w_mbytes_per_sec": 0 00:11:49.030 }, 00:11:49.030 "claimed": false, 00:11:49.030 "zoned": false, 00:11:49.030 "supported_io_types": { 00:11:49.030 "read": true, 00:11:49.030 "write": true, 00:11:49.030 "unmap": true, 00:11:49.030 "flush": true, 00:11:49.030 "reset": true, 00:11:49.030 "nvme_admin": true, 00:11:49.030 "nvme_io": true, 00:11:49.030 "nvme_io_md": false, 00:11:49.030 "write_zeroes": true, 00:11:49.030 "zcopy": false, 00:11:49.030 "get_zone_info": false, 00:11:49.030 "zone_management": false, 00:11:49.030 "zone_append": false, 00:11:49.030 "compare": true, 00:11:49.030 "compare_and_write": true, 00:11:49.030 "abort": true, 00:11:49.030 "seek_hole": false, 00:11:49.030 "seek_data": false, 00:11:49.030 "copy": true, 00:11:49.030 "nvme_iov_md": false 00:11:49.030 }, 00:11:49.030 "memory_domains": [ 00:11:49.030 { 00:11:49.030 "dma_device_id": "system", 00:11:49.030 "dma_device_type": 1 00:11:49.030 } 00:11:49.030 ], 00:11:49.030 "driver_specific": { 00:11:49.030 "nvme": [ 00:11:49.030 { 00:11:49.030 "trid": { 00:11:49.030 "trtype": "TCP", 00:11:49.030 "adrfam": "IPv4", 00:11:49.030 "traddr": "10.0.0.2", 00:11:49.030 "trsvcid": "4420", 00:11:49.030 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:49.030 }, 00:11:49.030 "ctrlr_data": { 00:11:49.030 "cntlid": 1, 00:11:49.030 "vendor_id": "0x8086", 00:11:49.030 "model_number": "SPDK bdev Controller", 00:11:49.030 "serial_number": "SPDK0", 00:11:49.030 "firmware_revision": "24.09", 00:11:49.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:49.030 "oacs": { 00:11:49.030 "security": 0, 00:11:49.030 "format": 0, 00:11:49.030 "firmware": 0, 00:11:49.030 "ns_manage": 0 00:11:49.030 }, 00:11:49.030 "multi_ctrlr": true, 00:11:49.030 
"ana_reporting": false 00:11:49.030 }, 00:11:49.030 "vs": { 00:11:49.030 "nvme_version": "1.3" 00:11:49.030 }, 00:11:49.030 "ns_data": { 00:11:49.030 "id": 1, 00:11:49.030 "can_share": true 00:11:49.030 } 00:11:49.030 } 00:11:49.030 ], 00:11:49.030 "mp_policy": "active_passive" 00:11:49.030 } 00:11:49.030 } 00:11:49.030 ] 00:11:49.289 21:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68723 00:11:49.289 21:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:49.289 21:11:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:49.289 Running I/O for 10 seconds... 00:11:50.224 Latency(us) 00:11:50.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:50.224 Nvme0n1 : 1.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:11:50.224 =================================================================================================================== 00:11:50.224 Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:11:50.224 00:11:51.160 21:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u caaad993-3009-41ae-9885-aaf6ff2ac77c 00:11:51.160 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:51.160 Nvme0n1 : 2.00 6223.00 24.31 0.00 0.00 0.00 0.00 0.00 00:11:51.160 =================================================================================================================== 00:11:51.160 Total : 6223.00 24.31 0.00 0.00 0.00 0.00 0.00 00:11:51.160 00:11:51.418 true 00:11:51.418 21:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caaad993-3009-41ae-9885-aaf6ff2ac77c 00:11:51.418 21:11:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:51.676 21:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:51.676 21:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:51.676 21:11:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 68723 00:11:52.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.241 Nvme0n1 : 3.00 6053.67 23.65 0.00 0.00 0.00 0.00 0.00 00:11:52.241 =================================================================================================================== 00:11:52.241 Total : 6053.67 23.65 0.00 0.00 0.00 0.00 0.00 00:11:52.241 00:11:53.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.174 Nvme0n1 : 4.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:11:53.174 =================================================================================================================== 00:11:53.174 Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:11:53.174 00:11:54.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.552 Nvme0n1 : 5.00 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:11:54.552 =================================================================================================================== 00:11:54.552 Total : 5842.00 22.82 0.00 0.00 0.00 
0.00 0.00 00:11:54.552 00:11:55.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.488 Nvme0n1 : 6.00 5863.17 22.90 0.00 0.00 0.00 0.00 0.00 00:11:55.488 =================================================================================================================== 00:11:55.488 Total : 5863.17 22.90 0.00 0.00 0.00 0.00 0.00 00:11:55.488 00:11:56.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.424 Nvme0n1 : 7.00 5860.14 22.89 0.00 0.00 0.00 0.00 0.00 00:11:56.424 =================================================================================================================== 00:11:56.424 Total : 5860.14 22.89 0.00 0.00 0.00 0.00 0.00 00:11:56.424 00:11:57.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.361 Nvme0n1 : 8.00 5857.88 22.88 0.00 0.00 0.00 0.00 0.00 00:11:57.361 =================================================================================================================== 00:11:57.361 Total : 5857.88 22.88 0.00 0.00 0.00 0.00 0.00 00:11:57.361 00:11:58.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.297 Nvme0n1 : 9.00 5827.89 22.77 0.00 0.00 0.00 0.00 0.00 00:11:58.297 =================================================================================================================== 00:11:58.297 Total : 5827.89 22.77 0.00 0.00 0.00 0.00 0.00 00:11:58.297 00:11:59.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.231 Nvme0n1 : 10.00 5816.60 22.72 0.00 0.00 0.00 0.00 0.00 00:11:59.231 =================================================================================================================== 00:11:59.231 Total : 5816.60 22.72 0.00 0.00 0.00 0.00 0.00 00:11:59.231 00:11:59.231 00:11:59.231 Latency(us) 00:11:59.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:59.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.231 Nvme0n1 : 10.01 5820.84 22.74 0.00 0.00 21982.23 17515.99 96754.97 00:11:59.231 =================================================================================================================== 00:11:59.231 Total : 5820.84 22.74 0.00 0.00 21982.23 17515.99 96754.97 00:11:59.231 0 00:11:59.231 21:11:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68699 00:11:59.231 21:11:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 68699 ']' 00:11:59.231 21:11:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 68699 00:11:59.231 21:11:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:11:59.231 21:11:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:59.231 21:11:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68699 00:11:59.231 killing process with pid 68699 00:11:59.231 Received shutdown signal, test time was about 10.000000 seconds 00:11:59.231 00:11:59.231 Latency(us) 00:11:59.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:59.231 =================================================================================================================== 00:11:59.231 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:59.231 21:11:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:11:59.231 21:11:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:59.231 21:11:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68699' 00:11:59.231 21:11:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 68699 00:11:59.231 21:11:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 68699 00:12:00.605 21:11:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:00.605 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:00.863 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caaad993-3009-41ae-9885-aaf6ff2ac77c 00:12:00.863 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:01.120 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:01.120 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:01.120 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 68332 00:12:01.120 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 68332 00:12:01.378 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 68332 Killed "${NVMF_APP[@]}" "$@" 00:12:01.378 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:01.378 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:01.378 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:01.378 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:01.378 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:01.378 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=68863 00:12:01.378 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 68863 00:12:01.378 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:01.378 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 68863 ']' 00:12:01.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.378 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.378 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.378 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
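The I/O phase above is driven through bdevperf's private RPC socket (it was started with -z, so it idles until perform_tests), and the step that defines the dirty variant follows: instead of deleting the lvol and lvstore, the target is killed outright so the lvstore is left without a clean shutdown. Roughly, with $RPC and $nvmfpid as assumed shorthand for the values used in this run:

  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  "$RPC" nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
  "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  kill -9 "$nvmfpid"        # no graceful shutdown, so the lvstore must be recovered on the next load
  wait "$nvmfpid" || true   # the shell reports it as Killed, as seen above for pid 68332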
00:12:01.378 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.378 21:11:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:01.378 [2024-07-14 21:11:12.830600] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:01.378 [2024-07-14 21:11:12.830755] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.635 [2024-07-14 21:11:13.004813] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.892 [2024-07-14 21:11:13.208402] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.892 [2024-07-14 21:11:13.208482] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.892 [2024-07-14 21:11:13.208498] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.892 [2024-07-14 21:11:13.208510] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.892 [2024-07-14 21:11:13.208520] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.892 [2024-07-14 21:11:13.208555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.892 [2024-07-14 21:11:13.392156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:02.454 21:11:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:02.454 21:11:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:02.454 21:11:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:02.454 21:11:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:02.454 21:11:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:02.454 21:11:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.454 21:11:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:02.711 [2024-07-14 21:11:14.040975] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:02.711 [2024-07-14 21:11:14.041359] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:02.711 [2024-07-14 21:11:14.041623] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:02.711 21:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:02.711 21:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5cfd192f-34ac-430f-ba78-916245b22cb3 00:12:02.711 21:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=5cfd192f-34ac-430f-ba78-916245b22cb3 00:12:02.711 21:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:02.711 21:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
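The bs_recover notices just above are the point of this test: when a freshly started target re-creates the AIO bdev on the same file, the lvstore is detected as not cleanly shut down and is replayed before the lvol reappears. A minimal sketch of that re-attach, reusing the assumed $RPC, $AIO_FILE and $lvol names from the earlier sketches:

  "$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096   # same backing file, new target process
  "$RPC" bdev_wait_for_examine                       # let vbdev_lvol finish loading (and recovering) the lvstore
  "$RPC" bdev_get_bdevs -b "$lvol" -t 2000           # the lvol comes back under its original UUID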
00:12:02.711 21:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:02.711 21:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:02.711 21:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:02.969 21:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5cfd192f-34ac-430f-ba78-916245b22cb3 -t 2000 00:12:03.226 [ 00:12:03.226 { 00:12:03.226 "name": "5cfd192f-34ac-430f-ba78-916245b22cb3", 00:12:03.226 "aliases": [ 00:12:03.226 "lvs/lvol" 00:12:03.226 ], 00:12:03.226 "product_name": "Logical Volume", 00:12:03.226 "block_size": 4096, 00:12:03.226 "num_blocks": 38912, 00:12:03.226 "uuid": "5cfd192f-34ac-430f-ba78-916245b22cb3", 00:12:03.226 "assigned_rate_limits": { 00:12:03.226 "rw_ios_per_sec": 0, 00:12:03.226 "rw_mbytes_per_sec": 0, 00:12:03.226 "r_mbytes_per_sec": 0, 00:12:03.226 "w_mbytes_per_sec": 0 00:12:03.226 }, 00:12:03.226 "claimed": false, 00:12:03.226 "zoned": false, 00:12:03.226 "supported_io_types": { 00:12:03.226 "read": true, 00:12:03.226 "write": true, 00:12:03.226 "unmap": true, 00:12:03.226 "flush": false, 00:12:03.226 "reset": true, 00:12:03.226 "nvme_admin": false, 00:12:03.226 "nvme_io": false, 00:12:03.226 "nvme_io_md": false, 00:12:03.226 "write_zeroes": true, 00:12:03.226 "zcopy": false, 00:12:03.226 "get_zone_info": false, 00:12:03.226 "zone_management": false, 00:12:03.226 "zone_append": false, 00:12:03.226 "compare": false, 00:12:03.226 "compare_and_write": false, 00:12:03.226 "abort": false, 00:12:03.226 "seek_hole": true, 00:12:03.226 "seek_data": true, 00:12:03.226 "copy": false, 00:12:03.226 "nvme_iov_md": false 00:12:03.226 }, 00:12:03.226 "driver_specific": { 00:12:03.226 "lvol": { 00:12:03.226 "lvol_store_uuid": "caaad993-3009-41ae-9885-aaf6ff2ac77c", 00:12:03.226 "base_bdev": "aio_bdev", 00:12:03.226 "thin_provision": false, 00:12:03.226 "num_allocated_clusters": 38, 00:12:03.226 "snapshot": false, 00:12:03.226 "clone": false, 00:12:03.226 "esnap_clone": false 00:12:03.226 } 00:12:03.226 } 00:12:03.226 } 00:12:03.226 ] 00:12:03.226 21:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:03.226 21:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:03.226 21:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caaad993-3009-41ae-9885-aaf6ff2ac77c 00:12:03.484 21:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:03.484 21:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caaad993-3009-41ae-9885-aaf6ff2ac77c 00:12:03.484 21:11:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:03.742 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:03.742 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:03.742 [2024-07-14 21:11:15.270694] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:12:04.000 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caaad993-3009-41ae-9885-aaf6ff2ac77c 00:12:04.000 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:04.000 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caaad993-3009-41ae-9885-aaf6ff2ac77c 00:12:04.000 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:04.000 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.000 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:04.000 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.000 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:04.000 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.000 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:04.000 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:04.000 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caaad993-3009-41ae-9885-aaf6ff2ac77c 00:12:04.259 request: 00:12:04.259 { 00:12:04.259 "uuid": "caaad993-3009-41ae-9885-aaf6ff2ac77c", 00:12:04.259 "method": "bdev_lvol_get_lvstores", 00:12:04.259 "req_id": 1 00:12:04.259 } 00:12:04.259 Got JSON-RPC error response 00:12:04.259 response: 00:12:04.259 { 00:12:04.259 "code": -19, 00:12:04.259 "message": "No such device" 00:12:04.259 } 00:12:04.259 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:04.259 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:04.259 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:04.259 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:04.259 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:04.517 aio_bdev 00:12:04.517 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5cfd192f-34ac-430f-ba78-916245b22cb3 00:12:04.517 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=5cfd192f-34ac-430f-ba78-916245b22cb3 00:12:04.517 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:04.517 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:04.517 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:04.517 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:04.517 21:11:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:04.517 21:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5cfd192f-34ac-430f-ba78-916245b22cb3 -t 2000 00:12:04.776 [ 00:12:04.776 { 00:12:04.776 "name": "5cfd192f-34ac-430f-ba78-916245b22cb3", 00:12:04.776 "aliases": [ 00:12:04.776 "lvs/lvol" 00:12:04.776 ], 00:12:04.776 "product_name": "Logical Volume", 00:12:04.776 "block_size": 4096, 00:12:04.776 "num_blocks": 38912, 00:12:04.776 "uuid": "5cfd192f-34ac-430f-ba78-916245b22cb3", 00:12:04.776 "assigned_rate_limits": { 00:12:04.776 "rw_ios_per_sec": 0, 00:12:04.776 "rw_mbytes_per_sec": 0, 00:12:04.776 "r_mbytes_per_sec": 0, 00:12:04.776 "w_mbytes_per_sec": 0 00:12:04.776 }, 00:12:04.776 "claimed": false, 00:12:04.776 "zoned": false, 00:12:04.776 "supported_io_types": { 00:12:04.776 "read": true, 00:12:04.776 "write": true, 00:12:04.776 "unmap": true, 00:12:04.776 "flush": false, 00:12:04.776 "reset": true, 00:12:04.776 "nvme_admin": false, 00:12:04.776 "nvme_io": false, 00:12:04.776 "nvme_io_md": false, 00:12:04.776 "write_zeroes": true, 00:12:04.776 "zcopy": false, 00:12:04.776 "get_zone_info": false, 00:12:04.776 "zone_management": false, 00:12:04.776 "zone_append": false, 00:12:04.776 "compare": false, 00:12:04.776 "compare_and_write": false, 00:12:04.776 "abort": false, 00:12:04.776 "seek_hole": true, 00:12:04.776 "seek_data": true, 00:12:04.776 "copy": false, 00:12:04.776 "nvme_iov_md": false 00:12:04.776 }, 00:12:04.776 "driver_specific": { 00:12:04.776 "lvol": { 00:12:04.776 "lvol_store_uuid": "caaad993-3009-41ae-9885-aaf6ff2ac77c", 00:12:04.776 "base_bdev": "aio_bdev", 00:12:04.776 "thin_provision": false, 00:12:04.776 "num_allocated_clusters": 38, 00:12:04.776 "snapshot": false, 00:12:04.776 "clone": false, 00:12:04.776 "esnap_clone": false 00:12:04.776 } 00:12:04.776 } 00:12:04.776 } 00:12:04.776 ] 00:12:04.776 21:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:04.776 21:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caaad993-3009-41ae-9885-aaf6ff2ac77c 00:12:04.776 21:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:05.035 21:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:05.035 21:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caaad993-3009-41ae-9885-aaf6ff2ac77c 00:12:05.035 21:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:05.293 21:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:05.293 21:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5cfd192f-34ac-430f-ba78-916245b22cb3 00:12:05.551 21:11:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u caaad993-3009-41ae-9885-aaf6ff2ac77c 00:12:05.809 21:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:06.068 21:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:06.326 ************************************ 00:12:06.326 END TEST lvs_grow_dirty 00:12:06.326 ************************************ 00:12:06.326 00:12:06.326 real 0m21.359s 00:12:06.326 user 0m45.886s 00:12:06.326 sys 0m8.246s 00:12:06.326 21:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:06.326 21:11:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:06.326 21:11:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:06.326 21:11:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:06.326 21:11:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:06.326 21:11:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:06.326 21:11:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:06.326 21:11:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:06.326 21:11:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:06.326 21:11:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:06.326 21:11:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:06.326 21:11:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:06.326 nvmf_trace.0 00:12:06.586 21:11:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:06.586 21:11:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:06.586 21:11:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:06.586 21:11:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:06.845 rmmod nvme_tcp 00:12:06.845 rmmod nvme_fabrics 00:12:06.845 rmmod nvme_keyring 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 68863 ']' 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 68863 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 68863 ']' 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 68863 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68863 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:06.845 killing process with pid 68863 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68863' 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 68863 00:12:06.845 21:11:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 68863 00:12:07.782 21:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.782 21:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:07.782 21:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:07.782 21:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.782 21:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:07.782 21:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.782 21:11:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.782 21:11:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.782 21:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:07.782 00:12:07.782 real 0m43.657s 00:12:07.782 user 1m10.813s 00:12:07.782 sys 0m11.504s 00:12:07.782 21:11:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.782 ************************************ 00:12:07.782 END TEST nvmf_lvs_grow 00:12:07.782 ************************************ 00:12:07.782 21:11:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:08.086 21:11:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:08.086 21:11:19 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:08.086 21:11:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:08.086 21:11:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:08.086 21:11:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:08.086 ************************************ 00:12:08.086 START TEST nvmf_bdev_io_wait 00:12:08.086 ************************************ 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:08.086 * Looking for test storage... 
00:12:08.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:08.086 Cannot find device "nvmf_tgt_br" 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:08.086 Cannot find device "nvmf_tgt_br2" 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:08.086 Cannot find device "nvmf_tgt_br" 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:08.086 Cannot find device "nvmf_tgt_br2" 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
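The cleanup above removes any leftover interfaces; what follows is nvmf_veth_init rebuilding the virtual topology that NET_TYPE=virt runs on. Stripped of the second target interface (nvmf_tgt_if2 / 10.0.0.3) and of the link-up steps shown below, it is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                              # the *_br peers hang off one bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the initiator side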
00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:08.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:08.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:12:08.086 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:08.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:12:08.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:12:08.345 00:12:08.345 --- 10.0.0.2 ping statistics --- 00:12:08.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.345 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:08.345 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:08.345 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:12:08.345 00:12:08.345 --- 10.0.0.3 ping statistics --- 00:12:08.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.345 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:08.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:08.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:08.345 00:12:08.345 --- 10.0.0.1 ping statistics --- 00:12:08.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.345 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=69195 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 69195 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 69195 ']' 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
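With connectivity to 10.0.0.2 and 10.0.0.3 verified, the bdev_io_wait target is started inside the namespace with --wait-for-rpc, which holds subsystem initialization until the deliberately tiny bdev_io pool has been configured (presumably so that bdev_io allocation keeps failing and the io_wait retry path gets exercised). Condensed from the trace below, with $RPC standing in for scripts/rpc.py:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # stand-in for waitforlisten
  "$RPC" bdev_set_options -p 5 -c 1                       # -p/-c set the bdev_io pool and per-thread cache sizes
  "$RPC" framework_start_init                             # now let the subsystems come up
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed namespace, 512-byte blocks
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420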
00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.345 21:11:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:08.603 [2024-07-14 21:11:19.977025] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:08.603 [2024-07-14 21:11:19.977818] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.861 [2024-07-14 21:11:20.154469] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.861 [2024-07-14 21:11:20.339477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.861 [2024-07-14 21:11:20.339544] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.861 [2024-07-14 21:11:20.339562] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.861 [2024-07-14 21:11:20.339576] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.861 [2024-07-14 21:11:20.339602] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.861 [2024-07-14 21:11:20.340444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.861 [2024-07-14 21:11:20.340643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.861 [2024-07-14 21:11:20.341296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.861 [2024-07-14 21:11:20.341328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.428 21:11:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:09.428 21:11:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:09.428 21:11:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:09.428 21:11:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:09.428 21:11:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:09.428 21:11:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.428 21:11:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:09.428 21:11:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.428 21:11:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:09.428 21:11:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.428 21:11:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:09.428 21:11:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.428 21:11:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:09.687 [2024-07-14 21:11:21.159798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:09.688 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.688 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.688 
21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.688 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:09.688 [2024-07-14 21:11:21.176867] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.688 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.688 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:09.688 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.688 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:09.947 Malloc0 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:09.947 [2024-07-14 21:11:21.287191] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=69236 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=69238 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:09.947 { 00:12:09.947 "params": { 00:12:09.947 "name": "Nvme$subsystem", 00:12:09.947 "trtype": "$TEST_TRANSPORT", 00:12:09.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:09.947 "adrfam": "ipv4", 00:12:09.947 "trsvcid": "$NVMF_PORT", 00:12:09.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:09.947 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:12:09.947 "hdgst": ${hdgst:-false}, 00:12:09.947 "ddgst": ${ddgst:-false} 00:12:09.947 }, 00:12:09.947 "method": "bdev_nvme_attach_controller" 00:12:09.947 } 00:12:09.947 EOF 00:12:09.947 )") 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=69240 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:09.947 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:09.947 { 00:12:09.947 "params": { 00:12:09.948 "name": "Nvme$subsystem", 00:12:09.948 "trtype": "$TEST_TRANSPORT", 00:12:09.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:09.948 "adrfam": "ipv4", 00:12:09.948 "trsvcid": "$NVMF_PORT", 00:12:09.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:09.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:09.948 "hdgst": ${hdgst:-false}, 00:12:09.948 "ddgst": ${ddgst:-false} 00:12:09.948 }, 00:12:09.948 "method": "bdev_nvme_attach_controller" 00:12:09.948 } 00:12:09.948 EOF 00:12:09.948 )") 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=69242 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:09.948 { 00:12:09.948 "params": { 00:12:09.948 "name": "Nvme$subsystem", 00:12:09.948 "trtype": "$TEST_TRANSPORT", 00:12:09.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:09.948 "adrfam": "ipv4", 00:12:09.948 "trsvcid": "$NVMF_PORT", 00:12:09.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:09.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:09.948 "hdgst": ${hdgst:-false}, 00:12:09.948 "ddgst": ${ddgst:-false} 00:12:09.948 }, 00:12:09.948 "method": "bdev_nvme_attach_controller" 00:12:09.948 } 00:12:09.948 EOF 00:12:09.948 )") 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:09.948 "params": { 00:12:09.948 "name": "Nvme1", 00:12:09.948 "trtype": "tcp", 00:12:09.948 "traddr": "10.0.0.2", 00:12:09.948 "adrfam": "ipv4", 00:12:09.948 "trsvcid": "4420", 00:12:09.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:09.948 "hdgst": false, 00:12:09.948 "ddgst": false 00:12:09.948 }, 00:12:09.948 "method": "bdev_nvme_attach_controller" 00:12:09.948 }' 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:09.948 { 00:12:09.948 "params": { 00:12:09.948 "name": "Nvme$subsystem", 00:12:09.948 "trtype": "$TEST_TRANSPORT", 00:12:09.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:09.948 "adrfam": "ipv4", 00:12:09.948 "trsvcid": "$NVMF_PORT", 00:12:09.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:09.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:09.948 "hdgst": ${hdgst:-false}, 00:12:09.948 "ddgst": ${ddgst:-false} 00:12:09.948 }, 00:12:09.948 "method": "bdev_nvme_attach_controller" 00:12:09.948 } 00:12:09.948 EOF 00:12:09.948 )") 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:09.948 "params": { 00:12:09.948 "name": "Nvme1", 00:12:09.948 "trtype": "tcp", 00:12:09.948 "traddr": "10.0.0.2", 00:12:09.948 "adrfam": "ipv4", 00:12:09.948 "trsvcid": "4420", 00:12:09.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:09.948 "hdgst": false, 00:12:09.948 "ddgst": false 00:12:09.948 }, 00:12:09.948 "method": "bdev_nvme_attach_controller" 00:12:09.948 }' 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:09.948 "params": { 00:12:09.948 "name": "Nvme1", 00:12:09.948 "trtype": "tcp", 00:12:09.948 "traddr": "10.0.0.2", 00:12:09.948 "adrfam": "ipv4", 00:12:09.948 "trsvcid": "4420", 00:12:09.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:09.948 "hdgst": false, 00:12:09.948 "ddgst": false 00:12:09.948 }, 00:12:09.948 "method": "bdev_nvme_attach_controller" 00:12:09.948 }' 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
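The params/method fragment printed above is what gen_nvmf_target_json feeds each bdevperf instance through /dev/fd/63, presumably via bash process substitution on the --json option. Assuming SPDK's standard JSON config layout, the file each instance reads looks roughly like the sketch below; the subsystems/config wrapper and the /tmp path are assumptions, while the params object is verbatim from the log:

    # Roughly what bdevperf receives on /dev/fd/63 (wrapper structure assumed).
    cat <<'EOF' > /tmp/nvme1.json
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF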
00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:09.948 "params": { 00:12:09.948 "name": "Nvme1", 00:12:09.948 "trtype": "tcp", 00:12:09.948 "traddr": "10.0.0.2", 00:12:09.948 "adrfam": "ipv4", 00:12:09.948 "trsvcid": "4420", 00:12:09.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:09.948 "hdgst": false, 00:12:09.948 "ddgst": false 00:12:09.948 }, 00:12:09.948 "method": "bdev_nvme_attach_controller" 00:12:09.948 }' 00:12:09.948 21:11:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 69236 00:12:09.948 [2024-07-14 21:11:21.401010] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:09.948 [2024-07-14 21:11:21.401182] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:09.948 [2024-07-14 21:11:21.407631] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:09.948 [2024-07-14 21:11:21.407802] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:09.948 [2024-07-14 21:11:21.427485] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:09.948 [2024-07-14 21:11:21.427690] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:09.948 [2024-07-14 21:11:21.440459] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:09.948 [2024-07-14 21:11:21.440621] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:10.207 [2024-07-14 21:11:21.616250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.207 [2024-07-14 21:11:21.660233] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.207 [2024-07-14 21:11:21.705305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.207 [2024-07-14 21:11:21.742387] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.466 [2024-07-14 21:11:21.787338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:10.466 [2024-07-14 21:11:21.850574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:10.466 [2024-07-14 21:11:21.915835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:10.466 [2024-07-14 21:11:21.968901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:10.466 [2024-07-14 21:11:22.002627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:10.724 [2024-07-14 21:11:22.033426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:10.724 [2024-07-14 21:11:22.112669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:10.724 Running I/O for 1 seconds... 00:12:10.724 Running I/O for 1 seconds... 00:12:10.724 [2024-07-14 21:11:22.236495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:10.983 Running I/O for 1 seconds... 00:12:10.983 Running I/O for 1 seconds... 
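The four "Running I/O for 1 seconds..." lines come from four bdevperf instances started concurrently, one workload per core mask with its own shared-memory instance id, after which the script waits on the PIDs it recorded (69236/69238/69240/69242). A condensed sketch of that launch pattern; the command lines are verbatim from the log, while backgrounding with & and capturing $! is an assumption about how the harness records the PIDs:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    $bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    $bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

    # One latency table per workload is printed once each instance finishes.
    wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID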
00:12:11.917 00:12:11.917 Latency(us) 00:12:11.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.917 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:11.917 Nvme1n1 : 1.01 8646.82 33.78 0.00 0.00 14730.18 5362.04 21805.61 00:12:11.917 =================================================================================================================== 00:12:11.917 Total : 8646.82 33.78 0.00 0.00 14730.18 5362.04 21805.61 00:12:11.917 00:12:11.917 Latency(us) 00:12:11.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.917 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:11.917 Nvme1n1 : 1.01 5992.91 23.41 0.00 0.00 21203.17 8996.31 34078.72 00:12:11.917 =================================================================================================================== 00:12:11.917 Total : 5992.91 23.41 0.00 0.00 21203.17 8996.31 34078.72 00:12:11.917 00:12:11.917 Latency(us) 00:12:11.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.917 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:11.917 Nvme1n1 : 1.01 7509.40 29.33 0.00 0.00 16949.30 9055.88 26214.40 00:12:11.917 =================================================================================================================== 00:12:11.917 Total : 7509.40 29.33 0.00 0.00 16949.30 9055.88 26214.40 00:12:11.917 00:12:11.917 Latency(us) 00:12:11.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.917 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:11.917 Nvme1n1 : 1.00 140416.24 548.50 0.00 0.00 908.44 458.01 1854.37 00:12:11.917 =================================================================================================================== 00:12:11.917 Total : 140416.24 548.50 0.00 0.00 908.44 458.01 1854.37 00:12:12.852 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 69238 00:12:12.852 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 69240 00:12:12.852 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 69242 00:12:12.852 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.852 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.852 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:12.852 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.852 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:12.852 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:12.852 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:12.852 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:12.852 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:12.852 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:12.852 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:13.110 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:13.110 rmmod nvme_tcp 00:12:13.110 rmmod nvme_fabrics 00:12:13.110 rmmod nvme_keyring 00:12:13.110 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:13.111 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:13.111 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:13.111 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 69195 ']' 00:12:13.111 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 69195 00:12:13.111 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 69195 ']' 00:12:13.111 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 69195 00:12:13.111 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:13.111 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:13.111 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69195 00:12:13.111 killing process with pid 69195 00:12:13.111 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:13.111 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:13.111 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69195' 00:12:13.111 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 69195 00:12:13.111 21:11:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 69195 00:12:14.049 21:11:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:14.049 21:11:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:14.049 21:11:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:14.049 21:11:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:14.049 21:11:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:14.049 21:11:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.049 21:11:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.049 21:11:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.049 21:11:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:14.049 ************************************ 00:12:14.049 END TEST nvmf_bdev_io_wait 00:12:14.049 ************************************ 00:12:14.049 00:12:14.049 real 0m6.151s 00:12:14.049 user 0m28.366s 00:12:14.049 sys 0m2.693s 00:12:14.049 21:11:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:14.049 21:11:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:14.049 21:11:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:14.049 21:11:25 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:14.049 21:11:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:14.049 21:11:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.049 21:11:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:14.049 ************************************ 00:12:14.049 START TEST nvmf_queue_depth 00:12:14.049 ************************************ 00:12:14.049 21:11:25 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:14.308 * Looking for test storage... 00:12:14.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:14.308 Cannot find device "nvmf_tgt_br" 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:14.308 Cannot find device "nvmf_tgt_br2" 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:14.308 Cannot find device "nvmf_tgt_br" 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:14.308 Cannot find device "nvmf_tgt_br2" 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:14.308 21:11:25 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:14.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:14.308 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:12:14.309 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:14.309 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:14.309 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:12:14.309 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:14.309 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:14.309 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:14.309 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:14.309 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:14.309 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:14.567 21:11:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:12:14.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:12:14.567 00:12:14.568 --- 10.0.0.2 ping statistics --- 00:12:14.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.568 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:14.568 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:14.568 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:12:14.568 00:12:14.568 --- 10.0.0.3 ping statistics --- 00:12:14.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.568 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:14.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:14.568 00:12:14.568 --- 10.0.0.1 ping statistics --- 00:12:14.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.568 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=69504 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 69504 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 69504 ']' 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
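The ping checks above close out the same nvmf_veth_init bring-up replayed over the preceding lines: one namespace for the target, veth pairs bridged on the host, 10.0.0.1 for the initiator and 10.0.0.2/10.0.0.3 for the target listeners. Condensed from the commands in the log (teardown of any stale topology and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: *_if ends carry the addresses, *_br ends join the host bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the host-side peers together and let NVMe/TCP (port 4420) in
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # connectivity check in both directions, as in the ping output above
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1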
00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:14.568 21:11:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.827 [2024-07-14 21:11:26.154611] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:14.827 [2024-07-14 21:11:26.154829] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.827 [2024-07-14 21:11:26.331446] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.086 [2024-07-14 21:11:26.504111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.086 [2024-07-14 21:11:26.504169] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.086 [2024-07-14 21:11:26.504200] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.086 [2024-07-14 21:11:26.504213] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.086 [2024-07-14 21:11:26.504223] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.086 [2024-07-14 21:11:26.504259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.345 [2024-07-14 21:11:26.674599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:15.604 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:15.604 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:15.604 21:11:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.604 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:15.604 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:15.604 21:11:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.604 21:11:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.604 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.604 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:15.863 [2024-07-14 21:11:27.152139] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:15.863 Malloc0 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:15.863 [2024-07-14 21:11:27.247958] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=69542 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 69542 /var/tmp/bdevperf.sock 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 69542 ']' 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:15.863 21:11:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:15.863 [2024-07-14 21:11:27.360045] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:15.863 [2024-07-14 21:11:27.360500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69542 ] 00:12:16.123 [2024-07-14 21:11:27.529343] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.382 [2024-07-14 21:11:27.721194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.382 [2024-07-14 21:11:27.897849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:16.951 21:11:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:16.951 21:11:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:16.951 21:11:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:16.951 21:11:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.951 21:11:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:16.951 NVMe0n1 00:12:16.951 21:11:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.951 21:11:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:16.951 Running I/O for 10 seconds... 00:12:29.168 00:12:29.168 Latency(us) 00:12:29.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.168 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:29.168 Verification LBA range: start 0x0 length 0x4000 00:12:29.168 NVMe0n1 : 10.11 6109.21 23.86 0.00 0.00 166560.42 17873.45 120109.61 00:12:29.168 =================================================================================================================== 00:12:29.168 Total : 6109.21 23.86 0.00 0.00 166560.42 17873.45 120109.61 00:12:29.168 0 00:12:29.168 21:11:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 69542 00:12:29.168 21:11:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 69542 ']' 00:12:29.168 21:11:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 69542 00:12:29.168 21:11:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:29.168 21:11:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:29.168 21:11:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69542 00:12:29.168 killing process with pid 69542 00:12:29.168 Received shutdown signal, test time was about 10.000000 seconds 00:12:29.168 00:12:29.168 Latency(us) 00:12:29.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.168 =================================================================================================================== 00:12:29.168 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:29.168 21:11:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:29.168 21:11:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:29.168 21:11:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 69542' 00:12:29.168 21:11:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 69542 00:12:29.168 21:11:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 69542 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:29.168 rmmod nvme_tcp 00:12:29.168 rmmod nvme_fabrics 00:12:29.168 rmmod nvme_keyring 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 69504 ']' 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 69504 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 69504 ']' 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 69504 00:12:29.168 21:11:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:29.169 21:11:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:29.169 21:11:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69504 00:12:29.169 killing process with pid 69504 00:12:29.169 21:11:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:29.169 21:11:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:29.169 21:11:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69504' 00:12:29.169 21:11:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 69504 00:12:29.169 21:11:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 69504 00:12:29.736 21:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:29.736 21:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:29.736 21:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:29.736 21:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.736 21:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:29.736 21:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.737 21:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.737 21:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.737 21:11:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:29.737 
************************************ 00:12:29.737 END TEST nvmf_queue_depth 00:12:29.737 ************************************ 00:12:29.737 00:12:29.737 real 0m15.600s 00:12:29.737 user 0m26.521s 00:12:29.737 sys 0m2.168s 00:12:29.737 21:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.737 21:11:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:29.737 21:11:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:29.737 21:11:41 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:29.737 21:11:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:29.737 21:11:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.737 21:11:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:29.737 ************************************ 00:12:29.737 START TEST nvmf_target_multipath 00:12:29.737 ************************************ 00:12:29.737 21:11:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:29.996 * Looking for test storage... 00:12:29.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:29.996 21:11:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:29.997 21:11:41 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:29.997 Cannot find device "nvmf_tgt_br" 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:29.997 Cannot find device "nvmf_tgt_br2" 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:29.997 Cannot find device "nvmf_tgt_br" 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:29.997 Cannot find device "nvmf_tgt_br2" 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:29.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:29.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:29.997 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
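The nvmf_veth_init block being traced here builds one small fixed topology: a network namespace for the target, three veth pairs, one bridge, and a single 10.0.0.0/24. Condensed into the commands actually run (the "Cannot find device" probes above are just pre-cleanup; the bridge, iptables and ping steps follow in the next lines):

    ip netns add nvmf_tgt_ns_spdk
    # three veth pairs; the *_if ends carry traffic, the *_br ends get enslaved to a bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # target-side interfaces move into the namespace the nvmf_tgt app will run in
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator at 10.0.0.1, the two target listeners at 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring all links up, create bridge nvmf_br, enslave the *_br peers,
    # open TCP/4420 in iptables, then verify with one ping per address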
00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:30.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:30.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:12:30.256 00:12:30.256 --- 10.0.0.2 ping statistics --- 00:12:30.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.256 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:30.256 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:30.256 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:12:30.256 00:12:30.256 --- 10.0.0.3 ping statistics --- 00:12:30.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.256 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:30.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:30.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:30.256 00:12:30.256 --- 10.0.0.1 ping statistics --- 00:12:30.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.256 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=69883 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 69883 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 69883 ']' 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:30.256 21:11:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:30.515 [2024-07-14 21:11:41.841541] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:30.515 [2024-07-14 21:11:41.841719] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.515 [2024-07-14 21:11:42.021115] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.774 [2024-07-14 21:11:42.263257] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.774 [2024-07-14 21:11:42.263330] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.774 [2024-07-14 21:11:42.263361] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.774 [2024-07-14 21:11:42.263375] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.774 [2024-07-14 21:11:42.263388] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.774 [2024-07-14 21:11:42.263689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.774 [2024-07-14 21:11:42.264289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.774 [2024-07-14 21:11:42.264407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.774 [2024-07-14 21:11:42.264419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.032 [2024-07-14 21:11:42.449810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:31.289 21:11:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.289 21:11:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:12:31.289 21:11:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.290 21:11:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:31.290 21:11:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:31.290 21:11:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.290 21:11:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:31.547 [2024-07-14 21:11:43.046451] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.547 21:11:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:12:32.119 Malloc0 00:12:32.119 21:11:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:12:32.119 21:11:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:32.376 21:11:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.634 [2024-07-14 21:11:44.133498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.634 21:11:44 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:32.891 [2024-07-14 21:11:44.366025] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:32.891 21:11:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:12:33.149 21:11:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:12:33.149 21:11:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.149 21:11:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:12:33.149 21:11:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.149 21:11:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:33.149 21:11:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:12:35.678 21:11:46 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:35.678 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:35.679 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:35.679 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:35.679 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:12:35.679 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:35.679 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=69977 00:12:35.679 21:11:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:12:35.679 [global] 00:12:35.679 thread=1 00:12:35.679 invalidate=1 00:12:35.679 rw=randrw 00:12:35.679 time_based=1 00:12:35.679 runtime=6 00:12:35.679 ioengine=libaio 00:12:35.679 direct=1 00:12:35.679 bs=4096 00:12:35.679 iodepth=128 00:12:35.679 norandommap=0 00:12:35.679 numjobs=1 00:12:35.679 00:12:35.679 verify_dump=1 00:12:35.679 verify_backlog=512 00:12:35.679 verify_state_save=0 00:12:35.679 do_verify=1 00:12:35.679 verify=crc32c-intel 00:12:35.679 [job0] 00:12:35.679 filename=/dev/nvme0n1 00:12:35.679 Could not set queue depth (nvme0n1) 00:12:35.679 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.679 fio-3.35 00:12:35.679 Starting 1 thread 00:12:36.246 21:11:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:12:36.505 21:11:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:36.764 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:12:36.764 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:36.764 
21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:36.764 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:36.764 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:36.764 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:36.764 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:12:36.764 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:36.764 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:36.764 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:36.764 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:36.764 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:36.764 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:12:37.329 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:37.329 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:12:37.329 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:37.329 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:37.329 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:37.329 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:37.329 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:37.329 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:12:37.329 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:37.329 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:37.329 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:37.329 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:37.329 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:37.329 21:11:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 69977 00:12:41.510 00:12:41.510 job0: (groupid=0, jobs=1): err= 0: pid=70001: Sun Jul 14 21:11:53 2024 00:12:41.510 read: IOPS=8385, BW=32.8MiB/s (34.3MB/s)(197MiB/6003msec) 00:12:41.510 slat (usec): min=6, max=7785, avg=72.64, stdev=287.76 00:12:41.510 clat (usec): min=1453, max=18830, avg=10508.29, stdev=1770.13 00:12:41.510 lat (usec): min=2222, max=18840, avg=10580.93, stdev=1773.13 00:12:41.510 clat percentiles (usec): 00:12:41.510 | 1.00th=[ 5473], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[ 9634], 00:12:41.510 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:12:41.510 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11863], 95.00th=[14746], 00:12:41.510 | 99.00th=[16450], 99.50th=[16712], 99.90th=[17433], 99.95th=[17695], 00:12:41.510 | 99.99th=[17957] 00:12:41.510 bw ( KiB/s): min= 8480, max=23272, per=54.82%, avg=18389.09, stdev=3716.02, samples=11 00:12:41.510 iops : min= 2120, max= 5818, avg=4597.27, stdev=929.00, samples=11 00:12:41.510 write: IOPS=4889, BW=19.1MiB/s (20.0MB/s)(99.0MiB/5181msec); 0 zone resets 00:12:41.510 slat (usec): min=14, max=4891, avg=80.90, stdev=210.41 00:12:41.510 clat (usec): min=1723, max=18775, avg=9182.08, stdev=1644.56 00:12:41.510 lat (usec): min=1762, max=18812, avg=9262.98, stdev=1649.30 00:12:41.510 clat percentiles (usec): 00:12:41.510 | 1.00th=[ 4146], 5.00th=[ 5473], 10.00th=[ 7177], 20.00th=[ 8586], 00:12:41.510 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9634], 00:12:41.510 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:12:41.510 | 99.00th=[14353], 99.50th=[15139], 99.90th=[16319], 99.95th=[16712], 00:12:41.510 | 99.99th=[17957] 00:12:41.510 bw ( KiB/s): min= 8752, max=22768, per=93.98%, avg=18382.55, stdev=3503.96, samples=11 00:12:41.510 iops : min= 2188, max= 5692, avg=4595.64, stdev=875.99, samples=11 00:12:41.510 lat (msec) : 2=0.01%, 4=0.36%, 10=45.17%, 20=54.47% 00:12:41.510 cpu : usr=4.57%, sys=19.64%, ctx=4371, majf=0, minf=121 00:12:41.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:41.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:41.510 issued rwts: total=50339,25334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:41.510 00:12:41.510 Run status group 0 (all jobs): 00:12:41.510 READ: bw=32.8MiB/s (34.3MB/s), 32.8MiB/s-32.8MiB/s (34.3MB/s-34.3MB/s), io=197MiB (206MB), run=6003-6003msec 00:12:41.510 WRITE: bw=19.1MiB/s (20.0MB/s), 19.1MiB/s-19.1MiB/s (20.0MB/s-20.0MB/s), io=99.0MiB (104MB), run=5181-5181msec 00:12:41.510 00:12:41.510 Disk stats (read/write): 00:12:41.510 nvme0n1: ios=49129/25334, merge=0/0, ticks=497060/218874, in_queue=715934, util=98.70% 00:12:41.510 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:12:41.769 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 
-s 4420 -n optimized 00:12:42.027 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:12:42.027 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:42.027 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:42.027 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:42.027 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:42.027 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:42.027 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:12:42.027 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:42.027 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:42.027 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:42.027 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:42.027 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:42.027 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:12:42.027 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=70073 00:12:42.027 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:42.028 21:11:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:12:42.028 [global] 00:12:42.028 thread=1 00:12:42.028 invalidate=1 00:12:42.028 rw=randrw 00:12:42.028 time_based=1 00:12:42.028 runtime=6 00:12:42.028 ioengine=libaio 00:12:42.028 direct=1 00:12:42.028 bs=4096 00:12:42.028 iodepth=128 00:12:42.028 norandommap=0 00:12:42.028 numjobs=1 00:12:42.028 00:12:42.028 verify_dump=1 00:12:42.028 verify_backlog=512 00:12:42.028 verify_state_save=0 00:12:42.028 do_verify=1 00:12:42.028 verify=crc32c-intel 00:12:42.028 [job0] 00:12:42.028 filename=/dev/nvme0n1 00:12:42.286 Could not set queue depth (nvme0n1) 00:12:42.286 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:42.286 fio-3.35 00:12:42.286 Starting 1 thread 00:12:43.221 21:11:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:12:43.478 21:11:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:43.737 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:12:43.737 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:43.737 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:43.737 21:11:55 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:43.737 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:43.737 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:43.737 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:12:43.737 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:43.737 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:43.737 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:43.737 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:43.737 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:43.737 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:12:43.995 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:44.303 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:12:44.303 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:44.303 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:44.303 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:44.303 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:44.303 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:44.303 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:12:44.303 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:44.303 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:44.303 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:44.303 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:44.303 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:44.303 21:11:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 70073 00:12:48.497 00:12:48.497 job0: (groupid=0, jobs=1): err= 0: pid=70101: Sun Jul 14 21:11:59 2024 00:12:48.497 read: IOPS=9307, BW=36.4MiB/s (38.1MB/s)(218MiB/6004msec) 00:12:48.497 slat (usec): min=4, max=7770, avg=57.24, stdev=244.31 00:12:48.497 clat (usec): min=387, max=19042, avg=9606.99, stdev=2448.26 00:12:48.497 lat (usec): min=398, max=19059, avg=9664.23, stdev=2467.68 00:12:48.497 clat percentiles (usec): 00:12:48.497 | 1.00th=[ 3654], 5.00th=[ 5211], 10.00th=[ 6063], 20.00th=[ 7308], 00:12:48.497 | 30.00th=[ 8979], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10421], 00:12:48.497 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11600], 95.00th=[13435], 00:12:48.497 | 99.00th=[16188], 99.50th=[16581], 99.90th=[17433], 99.95th=[17695], 00:12:48.497 | 99.99th=[18220] 00:12:48.497 bw ( KiB/s): min= 5880, max=33256, per=51.44%, avg=19151.64, stdev=8309.20, samples=11 00:12:48.497 iops : min= 1470, max= 8314, avg=4787.91, stdev=2077.30, samples=11 00:12:48.497 write: IOPS=5655, BW=22.1MiB/s (23.2MB/s)(112MiB/5069msec); 0 zone resets 00:12:48.497 slat (usec): min=15, max=2181, avg=64.01, stdev=172.29 00:12:48.497 clat (usec): min=493, max=18050, avg=7855.56, stdev=2500.85 00:12:48.497 lat (usec): min=552, max=18103, avg=7919.57, stdev=2522.56 00:12:48.497 clat percentiles (usec): 00:12:48.497 | 1.00th=[ 2868], 5.00th=[ 3785], 10.00th=[ 4359], 20.00th=[ 5145], 00:12:48.497 | 30.00th=[ 5932], 40.00th=[ 7373], 50.00th=[ 8848], 60.00th=[ 9241], 00:12:48.497 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10421], 95.00th=[10683], 00:12:48.497 | 99.00th=[13698], 99.50th=[14484], 99.90th=[16188], 99.95th=[16909], 00:12:48.497 | 99.99th=[17957] 00:12:48.497 bw ( KiB/s): min= 6176, max=33104, per=84.75%, avg=19174.27, stdev=8304.71, samples=11 00:12:48.497 iops : min= 1544, max= 8276, avg=4793.55, stdev=2076.18, samples=11 00:12:48.497 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:12:48.497 lat (msec) : 2=0.25%, 4=2.92%, 10=52.65%, 20=44.15% 00:12:48.497 cpu : usr=5.33%, sys=20.97%, ctx=4793, majf=0, minf=114 00:12:48.497 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:48.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.497 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:48.497 issued rwts: total=55884,28669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.497 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:48.497 00:12:48.497 Run status group 0 (all jobs): 00:12:48.497 READ: bw=36.4MiB/s (38.1MB/s), 36.4MiB/s-36.4MiB/s (38.1MB/s-38.1MB/s), io=218MiB (229MB), run=6004-6004msec 00:12:48.497 WRITE: bw=22.1MiB/s (23.2MB/s), 22.1MiB/s-22.1MiB/s (23.2MB/s-23.2MB/s), io=112MiB (117MB), run=5069-5069msec 00:12:48.497 00:12:48.497 Disk stats (read/write): 00:12:48.497 nvme0n1: ios=55270/28157, merge=0/0, ticks=509835/207489, in_queue=717324, util=98.70% 00:12:48.497 21:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:48.497 21:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.497 21:11:59 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:48.497 21:11:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:48.497 21:11:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.497 21:11:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:48.497 21:11:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.497 21:11:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:12:48.497 21:11:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.755 21:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:12:48.755 21:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:12:48.755 21:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:12:48.755 21:12:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:12:48.755 21:12:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:48.755 21:12:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:48.755 21:12:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:48.755 21:12:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:48.755 21:12:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:48.755 21:12:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:48.755 rmmod nvme_tcp 00:12:48.755 rmmod nvme_fabrics 00:12:48.755 rmmod nvme_keyring 00:12:48.755 21:12:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:48.755 21:12:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:48.756 21:12:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:48.756 21:12:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 69883 ']' 00:12:48.756 21:12:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 69883 00:12:48.756 21:12:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 69883 ']' 00:12:48.756 21:12:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 69883 00:12:48.756 21:12:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:12:48.756 21:12:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:49.014 21:12:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69883 00:12:49.014 21:12:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:49.014 21:12:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:49.014 killing process with pid 69883 00:12:49.014 21:12:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69883' 00:12:49.014 21:12:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 69883 00:12:49.014 21:12:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 69883 00:12:50.389 
21:12:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:50.389 21:12:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:50.389 21:12:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:50.389 21:12:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.389 21:12:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.389 21:12:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.389 21:12:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.389 21:12:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.389 21:12:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:50.389 00:12:50.389 real 0m20.419s 00:12:50.389 user 1m14.664s 00:12:50.389 sys 0m9.339s 00:12:50.389 21:12:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:50.389 21:12:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:50.389 ************************************ 00:12:50.389 END TEST nvmf_target_multipath 00:12:50.389 ************************************ 00:12:50.389 21:12:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:50.389 21:12:01 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:50.389 21:12:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:50.389 21:12:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.389 21:12:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:50.389 ************************************ 00:12:50.389 START TEST nvmf_zcopy 00:12:50.389 ************************************ 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:50.389 * Looking for test storage... 
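Both fio passes in the multipath run that just finished are the same 6-second 4k randrw job against /dev/nvme0n1 (the job file is dumped in the trace); what changes between and during the passes is only the ANA state of the two listeners. That comes down to one RPC per path on the target and one sysfs file per path on the initiator, shown here with the same identifiers as the trace (the comment glosses are ours):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # flip one path away and keep the other usable, as multipath.sh@92/@93 did
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    # the kernel initiator reports what it sees per path; check_ana_state polls these
    # files (note the RPC spells non_optimized, the sysfs value is non-optimized)
    cat /sys/block/nvme0c0n1/ana_state   # expected: inaccessible
    cat /sys/block/nvme0c1n1/ana_state   # expected: non-optimized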
00:12:50.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:50.389 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:50.390 Cannot find device "nvmf_tgt_br" 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:50.390 Cannot find device "nvmf_tgt_br2" 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:50.390 Cannot find device "nvmf_tgt_br" 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:50.390 Cannot find device "nvmf_tgt_br2" 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:12:50.390 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:50.648 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:50.648 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:50.648 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.648 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:12:50.648 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:50.648 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.648 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:12:50.648 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:50.648 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:50.648 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:50.648 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:50.648 21:12:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:12:50.648 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:50.648 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:50.648 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:50.648 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:50.648 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:50.648 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:50.648 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:50.648 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:50.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:12:50.649 00:12:50.649 --- 10.0.0.2 ping statistics --- 00:12:50.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.649 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:50.649 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:50.649 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:12:50.649 00:12:50.649 --- 10.0.0.3 ping statistics --- 00:12:50.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.649 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:50.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:50.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:50.649 00:12:50.649 --- 10.0.0.1 ping statistics --- 00:12:50.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.649 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=70356 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 70356 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 70356 ']' 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:50.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:50.649 21:12:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:50.907 [2024-07-14 21:12:02.298119] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:50.907 [2024-07-14 21:12:02.298284] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.166 [2024-07-14 21:12:02.474368] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.166 [2024-07-14 21:12:02.712844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.166 [2024-07-14 21:12:02.712921] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:51.166 [2024-07-14 21:12:02.712962] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.166 [2024-07-14 21:12:02.712991] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.166 [2024-07-14 21:12:02.713013] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.166 [2024-07-14 21:12:02.713070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.424 [2024-07-14 21:12:02.909431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:51.683 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:51.683 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:12:51.683 21:12:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:51.683 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:51.683 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.683 21:12:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.683 21:12:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:51.683 21:12:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:51.683 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.683 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.940 [2024-07-14 21:12:03.237574] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.940 [2024-07-14 21:12:03.253701] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:12:51.940 malloc0 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:51.940 { 00:12:51.940 "params": { 00:12:51.940 "name": "Nvme$subsystem", 00:12:51.940 "trtype": "$TEST_TRANSPORT", 00:12:51.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.940 "adrfam": "ipv4", 00:12:51.940 "trsvcid": "$NVMF_PORT", 00:12:51.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.940 "hdgst": ${hdgst:-false}, 00:12:51.940 "ddgst": ${ddgst:-false} 00:12:51.940 }, 00:12:51.940 "method": "bdev_nvme_attach_controller" 00:12:51.940 } 00:12:51.940 EOF 00:12:51.940 )") 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:51.940 21:12:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:51.940 "params": { 00:12:51.940 "name": "Nvme1", 00:12:51.940 "trtype": "tcp", 00:12:51.940 "traddr": "10.0.0.2", 00:12:51.940 "adrfam": "ipv4", 00:12:51.940 "trsvcid": "4420", 00:12:51.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.940 "hdgst": false, 00:12:51.940 "ddgst": false 00:12:51.940 }, 00:12:51.940 "method": "bdev_nvme_attach_controller" 00:12:51.940 }' 00:12:51.940 [2024-07-14 21:12:03.402598] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:51.940 [2024-07-14 21:12:03.402743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70389 ] 00:12:52.197 [2024-07-14 21:12:03.565923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.456 [2024-07-14 21:12:03.781886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.456 [2024-07-14 21:12:03.964437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:52.714 Running I/O for 10 seconds... 
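The trace above is the complete setup for this run: nvmf_tgt is started inside the nvmf_tgt_ns_spdk namespace that the nvmf/common.sh block created earlier, a TCP transport is created with zero-copy enabled (zcopy.sh@22), a 32 MB malloc bdev with 4 KiB blocks is exposed as namespace 1 of nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 (zcopy.sh@24-30), and bdevperf drives it through a generated bdev_nvme_attach_controller config. A minimal standalone sketch of the same sequence follows; it is an illustration only -- the $SPDK prefix, the backgrounding, the fixed sleep, and the /tmp/bdevperf.json file are assumptions (the test feeds the same JSON through a /dev/fd process substitution), while the RPC names and flags are copied from the trace.

# Hypothetical re-creation of the steps traced above (the veth/bridge/namespace
# plumbing from nvmf/common.sh@166-202 is assumed to be in place already).
SPDK=/home/vagrant/spdk_repo/spdk

# Target side: run nvmf_tgt inside the test namespace; its RPC socket
# (/var/tmp/spdk.sock) remains reachable from the host side.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
sleep 2   # the harness waits via waitforlisten instead of a fixed sleep

# Configure the target over JSON-RPC, mirroring zcopy.sh@22-30.
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -c 0 --zcopy
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" bdev_malloc_create 32 4096 -b malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Initiator side: the JSON that gen_nvmf_target_json expands to, wrapped in
# the usual "subsystems"/"bdev" config layout that bdevperf --json expects.
cat > /tmp/bdevperf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON

# Same 10-second verify workload as the run above.
"$SPDK/build/examples/bdevperf" --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192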
00:13:02.693 00:13:02.693 Latency(us) 00:13:02.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.693 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:02.693 Verification LBA range: start 0x0 length 0x1000 00:13:02.693 Nvme1n1 : 10.02 5279.39 41.25 0.00 0.00 24178.74 2815.07 36223.53 00:13:02.693 =================================================================================================================== 00:13:02.693 Total : 5279.39 41.25 0.00 0.00 24178.74 2815.07 36223.53 00:13:03.628 21:12:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=70517 00:13:03.628 21:12:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:03.628 21:12:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:03.628 21:12:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:03.628 21:12:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:03.628 21:12:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:03.628 21:12:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:03.628 21:12:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:03.628 21:12:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:03.628 { 00:13:03.628 "params": { 00:13:03.628 "name": "Nvme$subsystem", 00:13:03.628 "trtype": "$TEST_TRANSPORT", 00:13:03.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:03.628 "adrfam": "ipv4", 00:13:03.628 "trsvcid": "$NVMF_PORT", 00:13:03.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:03.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:03.628 "hdgst": ${hdgst:-false}, 00:13:03.628 "ddgst": ${ddgst:-false} 00:13:03.628 }, 00:13:03.628 "method": "bdev_nvme_attach_controller" 00:13:03.628 } 00:13:03.628 EOF 00:13:03.628 )") 00:13:03.628 21:12:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:03.628 [2024-07-14 21:12:15.005999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.628 [2024-07-14 21:12:15.006068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.628 21:12:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
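The Latency(us) table above (from the 10-second verify pass) is internally consistent: the MiB/s column is IOPS times the 8192-byte I/O size, and with 128 commands kept in flight the average latency implied by Little's law lands within a fraction of a percent of the reported 24178.74 us. A quick check, using only numbers from the table:

awk 'BEGIN {
    iops = 5279.39; io_size = 8192; qdepth = 128
    printf "throughput  : %.2f MiB/s\n", iops * io_size / (1024 * 1024)  # ~41.25, as reported
    printf "avg latency : %.0f us\n", qdepth / iops * 1e6                # ~24245 vs reported 24178.74
}'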
00:13:03.628 21:12:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:03.628 21:12:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:03.628 "params": { 00:13:03.628 "name": "Nvme1", 00:13:03.628 "trtype": "tcp", 00:13:03.628 "traddr": "10.0.0.2", 00:13:03.628 "adrfam": "ipv4", 00:13:03.628 "trsvcid": "4420", 00:13:03.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:03.628 "hdgst": false, 00:13:03.628 "ddgst": false 00:13:03.628 }, 00:13:03.628 "method": "bdev_nvme_attach_controller" 00:13:03.628 }' 00:13:03.628 [2024-07-14 21:12:15.017935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.628 [2024-07-14 21:12:15.017976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.628 [2024-07-14 21:12:15.029909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.628 [2024-07-14 21:12:15.029958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.628 [2024-07-14 21:12:15.041925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.628 [2024-07-14 21:12:15.041962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.628 [2024-07-14 21:12:15.053911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.628 [2024-07-14 21:12:15.053951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.628 [2024-07-14 21:12:15.065942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.628 [2024-07-14 21:12:15.065978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.628 [2024-07-14 21:12:15.077941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.628 [2024-07-14 21:12:15.077995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.628 [2024-07-14 21:12:15.089995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.628 [2024-07-14 21:12:15.090031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.628 [2024-07-14 21:12:15.101943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.628 [2024-07-14 21:12:15.101990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.628 [2024-07-14 21:12:15.103568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:13:03.628 [2024-07-14 21:12:15.103710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70517 ] 00:13:03.628 [2024-07-14 21:12:15.113967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.628 [2024-07-14 21:12:15.114005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.628 [2024-07-14 21:12:15.125945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.628 [2024-07-14 21:12:15.125988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.628 [2024-07-14 21:12:15.137962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.628 [2024-07-14 21:12:15.138000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.628 [2024-07-14 21:12:15.149954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.628 [2024-07-14 21:12:15.150009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.628 [2024-07-14 21:12:15.161952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.628 [2024-07-14 21:12:15.161987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.628 [2024-07-14 21:12:15.174004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.628 [2024-07-14 21:12:15.174071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.185979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.186014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.197964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.198000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.210001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.210037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.221972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.222025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.233992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.234026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.245997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.246048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.257988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.258022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.270008] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.270045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.274774] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.888 [2024-07-14 21:12:15.282063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.282108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.294005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.294041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.306038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.306085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.318011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.318067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.330027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.330061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.342038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.342075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.354045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.354081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.366062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.366138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.378054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.378089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.390042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.390095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.402091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.402140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.414090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.414159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.426087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.888 [2024-07-14 21:12:15.426136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.888 [2024-07-14 21:12:15.432230] reactor.c: 941:reactor_run: *NOTICE*: Reactor 
started on core 0 00:13:04.147 [2024-07-14 21:12:15.438095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.147 [2024-07-14 21:12:15.438148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.147 [2024-07-14 21:12:15.450115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.147 [2024-07-14 21:12:15.450180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.147 [2024-07-14 21:12:15.462088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.147 [2024-07-14 21:12:15.462140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.147 [2024-07-14 21:12:15.474092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.147 [2024-07-14 21:12:15.474156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.147 [2024-07-14 21:12:15.486074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.147 [2024-07-14 21:12:15.486126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.147 [2024-07-14 21:12:15.498190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.147 [2024-07-14 21:12:15.498237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.147 [2024-07-14 21:12:15.510146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.147 [2024-07-14 21:12:15.510212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.147 [2024-07-14 21:12:15.522112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.147 [2024-07-14 21:12:15.522191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.147 [2024-07-14 21:12:15.534116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.147 [2024-07-14 21:12:15.534152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.147 [2024-07-14 21:12:15.546093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.147 [2024-07-14 21:12:15.546141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.147 [2024-07-14 21:12:15.558115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.147 [2024-07-14 21:12:15.558183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.147 [2024-07-14 21:12:15.570131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.147 [2024-07-14 21:12:15.570165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.148 [2024-07-14 21:12:15.582122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.148 [2024-07-14 21:12:15.582158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.148 [2024-07-14 21:12:15.594136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.148 [2024-07-14 21:12:15.594169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.148 [2024-07-14 21:12:15.594931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 
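The repeating subsystem.c/nvmf_rpc.c error pairs throughout this second run appear to be exercised deliberately rather than being a failure: nvmf_subsystem_add_ns is re-issued for NSID 1 while bdevperf keeps I/O outstanding, and because adding a namespace goes through a pause/resume of the subsystem, each attempt pauses cnode1, is rejected ("Requested NSID 1 already in use"), and resumes it under load. The same pair can be provoked by hand against a target configured as in the earlier sketch (hypothetical illustration, not the test script itself):

# NSID 1 is already attached to cnode1, so this add attempt pauses the
# subsystem, fails, and resumes it -- producing the two errors seen above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1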
00:13:04.148 [2024-07-14 21:12:15.606188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.148 [2024-07-14 21:12:15.606242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.148 [2024-07-14 21:12:15.618152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.148 [2024-07-14 21:12:15.618185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.148 [2024-07-14 21:12:15.630164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.148 [2024-07-14 21:12:15.630201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.148 [2024-07-14 21:12:15.642158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.148 [2024-07-14 21:12:15.642191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.148 [2024-07-14 21:12:15.654219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.148 [2024-07-14 21:12:15.654271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.148 [2024-07-14 21:12:15.666162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.148 [2024-07-14 21:12:15.666195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.148 [2024-07-14 21:12:15.678147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.148 [2024-07-14 21:12:15.678184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.148 [2024-07-14 21:12:15.690202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.148 [2024-07-14 21:12:15.690243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.702190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.702227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.714217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.714274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.726310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.726354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.738316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.738391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.750270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.750312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 Running I/O for 5 seconds... 
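The 5-second pass that starts here keeps the same connection parameters but switches the workload from verify to a 50/50 random read/write mix at 8 KiB and queue depth 128 (the -t 5 -q 128 -w randrw -M 50 -o 8192 invocation traced at zcopy.sh@37 above), with the add_ns attempts continuing underneath it. With the hypothetical /tmp/bdevperf.json from the earlier sketch, an equivalent standalone invocation would be:

# Same attach config as before; only the workload parameters change.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192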
00:13:04.407 [2024-07-14 21:12:15.762292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.762349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.780942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.781003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.793824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.793895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.810959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.811021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.826935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.826976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.837989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.838048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.853348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.853387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.868453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.868496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.878885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.878925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.894195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.894254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.911432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.911471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.926857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.926916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.937526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.937566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.407 [2024-07-14 21:12:15.951484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.407 [2024-07-14 21:12:15.951546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:15.967574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 
[2024-07-14 21:12:15.967615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:15.985513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 [2024-07-14 21:12:15.985572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:16.002274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 [2024-07-14 21:12:16.002314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:16.017715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 [2024-07-14 21:12:16.017809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:16.041851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 [2024-07-14 21:12:16.041893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:16.057436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 [2024-07-14 21:12:16.057499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:16.074232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 [2024-07-14 21:12:16.074275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:16.090135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 [2024-07-14 21:12:16.090195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:16.101044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 [2024-07-14 21:12:16.101086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:16.118040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 [2024-07-14 21:12:16.118083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:16.134433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 [2024-07-14 21:12:16.134473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:16.146421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 [2024-07-14 21:12:16.146479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:16.163922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 [2024-07-14 21:12:16.163965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:16.178413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 [2024-07-14 21:12:16.178470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:16.195396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 [2024-07-14 21:12:16.195436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.666 [2024-07-14 21:12:16.211490] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.666 [2024-07-14 21:12:16.211533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.228811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.228860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.245203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.245260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.262076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.262115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.278163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.278228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.294613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.294652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.311187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.311246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.321627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.321665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.336553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.336594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.353761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.353830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.370198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.370239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.386922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.386963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.403408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.403467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.420119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.420187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.436610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.436667] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.454211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.454251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.925 [2024-07-14 21:12:16.470615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.925 [2024-07-14 21:12:16.470689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.485119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.485174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.500359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.500417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.510229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.510285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.526847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.526905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.542425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.542464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.557590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.557652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.573541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.573581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.591186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.591243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.606598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.606637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.617020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.617079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.632776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.632859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.647061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.647121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.662756] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.662822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.672895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.672952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.688031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.688072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.698896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.698954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.714203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.714243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.184 [2024-07-14 21:12:16.731334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.184 [2024-07-14 21:12:16.731393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.463 [2024-07-14 21:12:16.744403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.463 [2024-07-14 21:12:16.744463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.463 [2024-07-14 21:12:16.761305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.463 [2024-07-14 21:12:16.761365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.463 [2024-07-14 21:12:16.777991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.463 [2024-07-14 21:12:16.778032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.463 [2024-07-14 21:12:16.793398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.463 [2024-07-14 21:12:16.793460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.463 [2024-07-14 21:12:16.803794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.463 [2024-07-14 21:12:16.803835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.463 [2024-07-14 21:12:16.820609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.463 [2024-07-14 21:12:16.820668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.463 [2024-07-14 21:12:16.836977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.463 [2024-07-14 21:12:16.837017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.463 [2024-07-14 21:12:16.853463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.463 [2024-07-14 21:12:16.853523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.463 [2024-07-14 21:12:16.869482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.463 [2024-07-14 21:12:16.869521] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.463 [2024-07-14 21:12:16.880389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.464 [2024-07-14 21:12:16.880432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.464 [2024-07-14 21:12:16.896036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.464 [2024-07-14 21:12:16.896091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.464 [2024-07-14 21:12:16.911069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.464 [2024-07-14 21:12:16.911127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.464 [2024-07-14 21:12:16.927585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.464 [2024-07-14 21:12:16.927624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.464 [2024-07-14 21:12:16.944796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.464 [2024-07-14 21:12:16.944883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.464 [2024-07-14 21:12:16.961015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.464 [2024-07-14 21:12:16.961065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.464 [2024-07-14 21:12:16.977631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.464 [2024-07-14 21:12:16.977691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.464 [2024-07-14 21:12:16.993383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.464 [2024-07-14 21:12:16.993426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.767 [2024-07-14 21:12:17.005365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.767 [2024-07-14 21:12:17.005443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.767 [2024-07-14 21:12:17.021442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.767 [2024-07-14 21:12:17.021485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.036679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.036739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.052356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.052396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.071016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.071076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.086705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.086743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.105558] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.105602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.121099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.121186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.132426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.132470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.148678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.148719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.162017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.162065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.178722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.178810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.194451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.194509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.211488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.211529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.227746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.227819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.244868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.244910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.261616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.261669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.277053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.277109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.292380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.292428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.768 [2024-07-14 21:12:17.309423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.768 [2024-07-14 21:12:17.309463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.324382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.324437] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.339688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.339781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.350080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.350135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.365359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.365398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.382652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.382691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.398864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.398903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.415674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.415713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.432298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.432337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.447819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.447862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.458315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.458354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.473843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.473883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.489392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.489432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.505357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.505397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.515834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.515875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.531918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.531961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.547537] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.547576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.027 [2024-07-14 21:12:17.558158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.027 [2024-07-14 21:12:17.558197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.577177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.577219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.589187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.589226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.604434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.604474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.621284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.621323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.637371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.637411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.656458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.656498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.670697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.670736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.686497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.686536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.703564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.703604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.720233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.720272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.737746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.737829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.751477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.751517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.767640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.767680] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.785833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.785871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.800500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.800539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.816562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.816602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.287 [2024-07-14 21:12:17.833899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.287 [2024-07-14 21:12:17.833981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.546 [2024-07-14 21:12:17.850042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.546 [2024-07-14 21:12:17.850100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.546 [2024-07-14 21:12:17.865786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.546 [2024-07-14 21:12:17.865867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.546 [2024-07-14 21:12:17.881847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.546 [2024-07-14 21:12:17.881887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.546 [2024-07-14 21:12:17.900253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.546 [2024-07-14 21:12:17.900294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.546 [2024-07-14 21:12:17.914616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.546 [2024-07-14 21:12:17.914655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.547 [2024-07-14 21:12:17.930643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.547 [2024-07-14 21:12:17.930683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.547 [2024-07-14 21:12:17.947114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.547 [2024-07-14 21:12:17.947170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.547 [2024-07-14 21:12:17.964850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.547 [2024-07-14 21:12:17.964887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.547 [2024-07-14 21:12:17.981295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.547 [2024-07-14 21:12:17.981334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.547 [2024-07-14 21:12:17.997844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.547 [2024-07-14 21:12:17.997884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.547 [2024-07-14 21:12:18.013729] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.547 [2024-07-14 21:12:18.013795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.547 [2024-07-14 21:12:18.024580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.547 [2024-07-14 21:12:18.024620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.547 [2024-07-14 21:12:18.039631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.547 [2024-07-14 21:12:18.039670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.547 [2024-07-14 21:12:18.056142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.547 [2024-07-14 21:12:18.056198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.547 [2024-07-14 21:12:18.073094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.547 [2024-07-14 21:12:18.073135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.547 [2024-07-14 21:12:18.089186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.547 [2024-07-14 21:12:18.089225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.806 [2024-07-14 21:12:18.105465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.806 [2024-07-14 21:12:18.105505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.806 [2024-07-14 21:12:18.116622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.806 [2024-07-14 21:12:18.116662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.806 [2024-07-14 21:12:18.131752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.806 [2024-07-14 21:12:18.131820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.806 [2024-07-14 21:12:18.147677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.806 [2024-07-14 21:12:18.147742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.806 [2024-07-14 21:12:18.159219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.806 [2024-07-14 21:12:18.159262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.806 [2024-07-14 21:12:18.172617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.806 [2024-07-14 21:12:18.172656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.806 [2024-07-14 21:12:18.188879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.807 [2024-07-14 21:12:18.188920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.807 [2024-07-14 21:12:18.205039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.807 [2024-07-14 21:12:18.205079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.807 [2024-07-14 21:12:18.214905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.807 [2024-07-14 21:12:18.214945] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.807 [2024-07-14 21:12:18.229862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.807 [2024-07-14 21:12:18.229902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.807 [2024-07-14 21:12:18.245312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.807 [2024-07-14 21:12:18.245362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.807 [2024-07-14 21:12:18.261230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.807 [2024-07-14 21:12:18.261270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.807 [2024-07-14 21:12:18.279915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.807 [2024-07-14 21:12:18.279957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.807 [2024-07-14 21:12:18.294020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.807 [2024-07-14 21:12:18.294063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.807 [2024-07-14 21:12:18.309028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.807 [2024-07-14 21:12:18.309073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.807 [2024-07-14 21:12:18.321279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.807 [2024-07-14 21:12:18.321324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.807 [2024-07-14 21:12:18.339862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.807 [2024-07-14 21:12:18.339911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.357010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.357082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.373495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.373538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.390990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.391061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.406307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.406349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.423606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.423653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.439058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.439100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.454313] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.454354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.470111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.470166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.487864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.487906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.500390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.500430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.516145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.516186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.533123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.533165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.549499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.549539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.567447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.567488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.583192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.583232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.066 [2024-07-14 21:12:18.599673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.066 [2024-07-14 21:12:18.599713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.619415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.619455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.634552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.634591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.644746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.644843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.661387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.661426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.676546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.676585] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.692748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.692830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.709270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.709310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.725561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.725601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.742216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.742256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.758845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.758886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.775187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.775227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.792291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.792330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.808952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.808993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.825170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.825210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.841877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.841917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.326 [2024-07-14 21:12:18.858508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.326 [2024-07-14 21:12:18.858579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:18.876862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:18.876951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:18.893112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:18.893170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:18.909412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:18.909452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:18.926643] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:18.926683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:18.942101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:18.942156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:18.957822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:18.957862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:18.968049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:18.968106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:18.983541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:18.983581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:19.002051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:19.002092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:19.016019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:19.016060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:19.031616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:19.031657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:19.048224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:19.048263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:19.065058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:19.065098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:19.082299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:19.082340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:19.098651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:19.098691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:19.117251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:19.117290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.586 [2024-07-14 21:12:19.133635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.586 [2024-07-14 21:12:19.133676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.149458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.149497] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.165942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.165982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.183111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.183183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.199088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.199161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.209914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.209956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.226124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.226180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.241994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.242033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.260643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.260682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.274592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.274642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.290046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.290087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.306799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.306838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.323418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.323459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.342354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.342393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.356269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.356307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.372684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.372724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.846 [2024-07-14 21:12:19.388934] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.846 [2024-07-14 21:12:19.388990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.405368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.405408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.415117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.415171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.431432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.431473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.447322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.447365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.463058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.463135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.476844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.476916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.493908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.493951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.509443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.509484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.526940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.526980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.541875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.541949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.557208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.557249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.572964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.573006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.589314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.589354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.599153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.599192] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.615174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.615213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.632718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.632783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.105 [2024-07-14 21:12:19.646938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.105 [2024-07-14 21:12:19.646978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.663587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.663626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.680391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.680430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.697369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.697409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.713655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.713694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.732722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.732787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.747750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.747818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.764281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.764320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.780036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.780078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.789941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.789980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.806607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.806646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.821959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.821999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.832239] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.832278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.847988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.848030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.861776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.861827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.877568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.877607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.888531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.888602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.365 [2024-07-14 21:12:19.905811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.365 [2024-07-14 21:12:19.905861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:19.921932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:19.921971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:19.936642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:19.936680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:19.952668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:19.952708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:19.969828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:19.969868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:19.986313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:19.986352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:20.002894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:20.002956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:20.019202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:20.019246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:20.035386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:20.035428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:20.051122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:20.051177] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:20.070233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:20.070273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:20.085770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:20.085819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:20.101398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:20.101437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:20.116372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:20.116411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:20.132957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:20.132997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:20.149042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:20.149080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.624 [2024-07-14 21:12:20.167250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.624 [2024-07-14 21:12:20.167290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.882 [2024-07-14 21:12:20.182238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.882 [2024-07-14 21:12:20.182278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.882 [2024-07-14 21:12:20.197060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.882 [2024-07-14 21:12:20.197105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.882 [2024-07-14 21:12:20.212577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.882 [2024-07-14 21:12:20.212618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.882 [2024-07-14 21:12:20.228585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.882 [2024-07-14 21:12:20.228624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.882 [2024-07-14 21:12:20.245334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.882 [2024-07-14 21:12:20.245373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.882 [2024-07-14 21:12:20.261658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.882 [2024-07-14 21:12:20.261697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.883 [2024-07-14 21:12:20.279533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.883 [2024-07-14 21:12:20.279584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.883 [2024-07-14 21:12:20.294912] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.883 [2024-07-14 21:12:20.294952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.883 [2024-07-14 21:12:20.305091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.883 [2024-07-14 21:12:20.305161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.883 [2024-07-14 21:12:20.320866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.883 [2024-07-14 21:12:20.320905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.883 [2024-07-14 21:12:20.336885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.883 [2024-07-14 21:12:20.336925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.883 [2024-07-14 21:12:20.352032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.883 [2024-07-14 21:12:20.352104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.883 [2024-07-14 21:12:20.368904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.883 [2024-07-14 21:12:20.368945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.883 [2024-07-14 21:12:20.384681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.883 [2024-07-14 21:12:20.384720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.883 [2024-07-14 21:12:20.401053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.883 [2024-07-14 21:12:20.401092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.883 [2024-07-14 21:12:20.419619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.883 [2024-07-14 21:12:20.419660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.141 [2024-07-14 21:12:20.435135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.435175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.450904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.450945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.461792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.461830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.477427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.477466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.492157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.492197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.508430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.508470] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.524913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.524953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.542331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.542370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.558439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.558478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.568987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.569030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.586819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.586880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.601266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.601310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.618302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.618363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.634304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.634347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.645656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.645698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.661719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.661805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.142 [2024-07-14 21:12:20.678012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.142 [2024-07-14 21:12:20.678055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.693727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.693815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.709982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.710023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.730108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.730147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.744509] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.744548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.760332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.760371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.772349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.772386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:09.401
00:13:09.401 Latency(us)
00:13:09.401 Device Information                  : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:09.401 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:13:09.401 Nvme1n1                             :       5.01   10488.71      81.94       0.00       0.00   12188.96    5362.04   23235.49
00:13:09.401 ===================================================================================================================
00:13:09.401 Total                               :              10488.71      81.94       0.00       0.00   12188.96    5362.04   23235.49
00:13:09.401 [2024-07-14 21:12:20.784350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.784387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.796353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.796389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.808407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.808460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.820381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.820581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.832367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.832405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.844369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.844403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.856371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.856407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.868427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.868478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.880390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.880425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.892365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.892398] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.904400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.904438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.916415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.916454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.928386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.928424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.401 [2024-07-14 21:12:20.940410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.401 [2024-07-14 21:12:20.940446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:20.952396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:20.952432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:20.964377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:20.964411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:20.976399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:20.976432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:20.988418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:20.988464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.000420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.000454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.012413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.012447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.024404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.024439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.036509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.036567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.048462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.048504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.060407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.060440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.072431] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.072464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.084433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.084467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.096434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.096468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.108437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.108470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.120429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.120464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.132542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.132598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.144446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.144480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.156437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.156478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.168452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.168485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.180454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.180630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.192467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.192624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.661 [2024-07-14 21:12:21.204501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.661 [2024-07-14 21:12:21.204554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.216475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.216511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.228502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.228538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.240489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.240523] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.252476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.252509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.264493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.264527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.276487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.276523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.288503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.288538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.300514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.300548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.312499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.312532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.324516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.324549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.336578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.336622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.348515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.348549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.360532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.360565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.372553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.372586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.384564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.384755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.396568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.396604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.408553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.408587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.420575] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.420609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.432644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.432703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.444561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.444595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.456576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.456610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.921 [2024-07-14 21:12:21.468616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.921 [2024-07-14 21:12:21.468653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.480596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.480630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.492635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.492668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.504620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.504663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.516627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.516661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.528632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.528667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.540608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.540642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.552616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.552651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.564606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.564639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.584658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.584704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.596628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.596662] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.608616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.608649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.620676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.620720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.632666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.632702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.644630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.644663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.656651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.656684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.668654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.668688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.680658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.680691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.692665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.692700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.704689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.704725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.716702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.716742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.181 [2024-07-14 21:12:21.728687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.181 [2024-07-14 21:12:21.728734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.441 [2024-07-14 21:12:21.740707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:10.441 [2024-07-14 21:12:21.740759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:10.441 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (70517) - No such process 00:13:10.441 21:12:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 70517 00:13:10.441 21:12:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.441 21:12:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.441 21:12:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:10.441 21:12:21 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.441 21:12:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:10.441 21:12:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.441 21:12:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:10.441 delay0 00:13:10.441 21:12:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.441 21:12:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:10.441 21:12:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.441 21:12:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:10.441 21:12:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.441 21:12:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:10.700 [2024-07-14 21:12:22.000930] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:17.260 Initializing NVMe Controllers 00:13:17.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:17.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:17.260 Initialization complete. Launching workers. 00:13:17.260 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 115 00:13:17.261 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 402, failed to submit 33 00:13:17.261 success 268, unsuccess 134, failed 0 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:17.261 rmmod nvme_tcp 00:13:17.261 rmmod nvme_fabrics 00:13:17.261 rmmod nvme_keyring 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 70356 ']' 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 70356 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 70356 ']' 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 70356 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 70356 00:13:17.261 killing process with pid 70356 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70356' 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 70356 00:13:17.261 21:12:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 70356 00:13:17.827 21:12:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:17.827 21:12:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:17.827 21:12:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:17.827 21:12:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:17.827 21:12:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:17.827 21:12:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.827 21:12:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.827 21:12:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.827 21:12:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:17.827 00:13:17.827 real 0m27.564s 00:13:17.827 user 0m45.652s 00:13:17.827 sys 0m6.844s 00:13:17.827 ************************************ 00:13:17.828 END TEST nvmf_zcopy 00:13:17.828 ************************************ 00:13:17.828 21:12:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.828 21:12:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.828 21:12:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:17.828 21:12:29 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:17.828 21:12:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:17.828 21:12:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.828 21:12:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:17.828 ************************************ 00:13:17.828 START TEST nvmf_nmic 00:13:17.828 ************************************ 00:13:17.828 21:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:18.087 * Looking for test storage... 
00:13:18.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.087 21:12:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
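nvmf_veth_init, traced below, builds the virtual network the nmic test runs over: the initiator keeps 10.0.0.1 in the default namespace, the target addresses 10.0.0.2 and 10.0.0.3 live on veth interfaces inside the nvmf_tgt_ns_spdk namespace, and the peer ends are tied together by the nvmf_br bridge so NVMe/TCP traffic on port 4420 can cross. A condensed sketch of that topology, using the same names and addresses as the trace (run as root; the cleanup steps that produce the "Cannot find device" noise below are omitted), would look roughly like this:

# Sketch only - condensed from the nvmf_veth_init trace that follows; run as root.
ip netns add nvmf_tgt_ns_spdk                                   # namespace that will host the target
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target-side veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target-side veth pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up        # bring host-side links up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge joining both sides
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let bridged traffic through
ping -c 1 10.0.0.2                                                  # reachability check, as in the trace

The single-packet pings in the trace serve the same purpose as the last line here: proving initiator-to-target reachability in both directions before the nvmf target is started.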
00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:18.088 Cannot find device "nvmf_tgt_br" 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:18.088 Cannot find device "nvmf_tgt_br2" 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:18.088 Cannot find device "nvmf_tgt_br" 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:18.088 Cannot find device "nvmf_tgt_br2" 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:18.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:18.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:18.088 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:18.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:13:18.348 00:13:18.348 --- 10.0.0.2 ping statistics --- 00:13:18.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.348 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:18.348 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:18.348 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:13:18.348 00:13:18.348 --- 10.0.0.3 ping statistics --- 00:13:18.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.348 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:18.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:18.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:18.348 00:13:18.348 --- 10.0.0.1 ping statistics --- 00:13:18.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.348 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=70866 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 70866 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 70866 ']' 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.348 21:12:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:18.607 [2024-07-14 21:12:29.907907] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:18.607 [2024-07-14 21:12:29.908059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.607 [2024-07-14 21:12:30.081028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:18.866 [2024-07-14 21:12:30.245654] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.866 [2024-07-14 21:12:30.245734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:18.866 [2024-07-14 21:12:30.245750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.866 [2024-07-14 21:12:30.245780] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.866 [2024-07-14 21:12:30.245832] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.866 [2024-07-14 21:12:30.246572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.866 [2024-07-14 21:12:30.246736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.866 [2024-07-14 21:12:30.246841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.866 [2024-07-14 21:12:30.246907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.866 [2024-07-14 21:12:30.414732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.435 [2024-07-14 21:12:30.864276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.435 Malloc0 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.435 21:12:30 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.435 [2024-07-14 21:12:30.967866] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.435 test case1: single bdev can't be used in multiple subsystems 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.435 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.695 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.695 21:12:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:19.695 21:12:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:19.695 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.695 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.695 [2024-07-14 21:12:30.991594] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:19.695 [2024-07-14 21:12:30.991647] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:19.695 [2024-07-14 21:12:30.991665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.695 request: 00:13:19.695 { 00:13:19.695 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:19.695 "namespace": { 00:13:19.695 "bdev_name": "Malloc0", 00:13:19.695 "no_auto_visible": false 00:13:19.695 }, 00:13:19.695 "method": "nvmf_subsystem_add_ns", 00:13:19.695 "req_id": 1 00:13:19.695 } 00:13:19.695 Got JSON-RPC error response 00:13:19.695 response: 00:13:19.695 { 00:13:19.695 "code": -32602, 00:13:19.695 "message": "Invalid parameters" 00:13:19.695 } 00:13:19.695 21:12:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:19.695 21:12:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:19.695 21:12:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:19.695 21:12:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:19.695 Adding namespace failed - expected result. 
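The JSON-RPC failure above is the expected outcome of test case 1: adding Malloc0 to cnode1 takes an exclusive_write claim on the bdev, so the identical nvmf_subsystem_add_ns call against cnode2 is rejected with code -32602 and the second namespace is never created. A minimal sketch of the same sequence against an already-running target (assuming the repo's rpc.py and the default RPC socket) is:

# Sketch only - reproduces the expected claim conflict from test case 1
# against a running nvmf target; rpc.py path taken from this repo layout.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # first claim succeeds
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
  && echo 'unexpected: second claim succeeded' \
  || echo 'expected: Malloc0 already claimed by cnode1'

Releasing the claim first (for example with nvmf_subsystem_remove_ns on cnode1, as the zcopy test does earlier in this log) is what would allow another subsystem to pick the bdev up.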
00:13:19.695 21:12:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:19.695 test case2: host connect to nvmf target in multiple paths 00:13:19.695 21:12:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:19.695 21:12:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.695 21:12:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.695 [2024-07-14 21:12:31.003734] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:19.695 21:12:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.695 21:12:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.695 21:12:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:19.954 21:12:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.954 21:12:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:19.954 21:12:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.954 21:12:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:19.954 21:12:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:21.858 21:12:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:21.858 21:12:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:21.858 21:12:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.858 21:12:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:21.858 21:12:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.858 21:12:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:21.858 21:12:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:21.858 [global] 00:13:21.858 thread=1 00:13:21.858 invalidate=1 00:13:21.858 rw=write 00:13:21.858 time_based=1 00:13:21.858 runtime=1 00:13:21.858 ioengine=libaio 00:13:21.858 direct=1 00:13:21.858 bs=4096 00:13:21.858 iodepth=1 00:13:21.858 norandommap=0 00:13:21.858 numjobs=1 00:13:21.858 00:13:21.858 verify_dump=1 00:13:21.858 verify_backlog=512 00:13:21.858 verify_state_save=0 00:13:21.858 do_verify=1 00:13:21.858 verify=crc32c-intel 00:13:21.858 [job0] 00:13:21.858 filename=/dev/nvme0n1 00:13:21.858 Could not set queue depth (nvme0n1) 00:13:22.115 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:22.115 fio-3.35 00:13:22.115 Starting 1 thread 00:13:23.050 00:13:23.050 job0: (groupid=0, jobs=1): err= 0: pid=70952: Sun Jul 14 21:12:34 2024 00:13:23.050 read: IOPS=2332, BW=9331KiB/s (9555kB/s)(9340KiB/1001msec) 00:13:23.050 slat (nsec): min=11513, max=62696, avg=16900.40, stdev=5832.93 00:13:23.050 clat (usec): 
min=170, max=1310, avg=225.21, stdev=49.43 00:13:23.050 lat (usec): min=184, max=1323, avg=242.12, stdev=50.49 00:13:23.050 clat percentiles (usec): 00:13:23.050 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 200], 00:13:23.050 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 227], 00:13:23.050 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 258], 95.00th=[ 273], 00:13:23.050 | 99.00th=[ 338], 99.50th=[ 404], 99.90th=[ 1012], 99.95th=[ 1188], 00:13:23.050 | 99.99th=[ 1303] 00:13:23.050 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:23.050 slat (usec): min=16, max=126, avg=24.87, stdev= 8.30 00:13:23.050 clat (usec): min=110, max=301, avg=140.95, stdev=20.13 00:13:23.050 lat (usec): min=129, max=349, avg=165.82, stdev=23.14 00:13:23.050 clat percentiles (usec): 00:13:23.050 | 1.00th=[ 114], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 124], 00:13:23.050 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 137], 60.00th=[ 143], 00:13:23.050 | 70.00th=[ 149], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 180], 00:13:23.050 | 99.00th=[ 196], 99.50th=[ 208], 99.90th=[ 233], 99.95th=[ 235], 00:13:23.050 | 99.99th=[ 302] 00:13:23.050 bw ( KiB/s): min=12263, max=12263, per=100.00%, avg=12263.00, stdev= 0.00, samples=1 00:13:23.050 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:13:23.050 lat (usec) : 250=92.97%, 500=6.88%, 750=0.06%, 1000=0.02% 00:13:23.050 lat (msec) : 2=0.06% 00:13:23.050 cpu : usr=1.90%, sys=8.10%, ctx=4896, majf=0, minf=2 00:13:23.050 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.050 issued rwts: total=2335,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.050 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.050 00:13:23.050 Run status group 0 (all jobs): 00:13:23.050 READ: bw=9331KiB/s (9555kB/s), 9331KiB/s-9331KiB/s (9555kB/s-9555kB/s), io=9340KiB (9564kB), run=1001-1001msec 00:13:23.050 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:13:23.050 00:13:23.050 Disk stats (read/write): 00:13:23.050 nvme0n1: ios=2098/2420, merge=0/0, ticks=489/390, in_queue=879, util=91.48% 00:13:23.050 21:12:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:23.309 21:12:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.309 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:23.309 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:23.309 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.309 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:23.309 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.309 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:23.309 21:12:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:23.309 21:12:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:23.309 21:12:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:23.309 21:12:34 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:23.309 21:12:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.309 21:12:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.310 rmmod nvme_tcp 00:13:23.310 rmmod nvme_fabrics 00:13:23.310 rmmod nvme_keyring 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 70866 ']' 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 70866 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 70866 ']' 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 70866 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70866 00:13:23.310 killing process with pid 70866 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70866' 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 70866 00:13:23.310 21:12:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 70866 00:13:24.690 21:12:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:24.690 21:12:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:24.690 21:12:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:24.690 21:12:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:24.690 21:12:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:24.690 21:12:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.690 21:12:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.690 21:12:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.690 21:12:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:24.690 ************************************ 00:13:24.690 END TEST nvmf_nmic 00:13:24.690 ************************************ 00:13:24.690 00:13:24.690 real 0m6.666s 00:13:24.690 user 0m20.437s 00:13:24.690 sys 0m2.287s 00:13:24.690 21:12:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:24.690 21:12:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:24.690 21:12:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:24.690 21:12:36 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:24.690 21:12:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:24.690 21:12:36 nvmf_tcp 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.690 21:12:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:24.690 ************************************ 00:13:24.690 START TEST nvmf_fio_target 00:13:24.690 ************************************ 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:24.690 * Looking for test storage... 00:13:24.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:24.690 
21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:24.690 Cannot find device "nvmf_tgt_br" 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:24.690 Cannot find device "nvmf_tgt_br2" 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:24.690 Cannot find device "nvmf_tgt_br" 00:13:24.690 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:13:24.691 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:24.691 Cannot find device "nvmf_tgt_br2" 00:13:24.691 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:13:24.691 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:24.950 21:12:36 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:24.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:24.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:24.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:24.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:13:24.950 00:13:24.950 --- 10.0.0.2 ping statistics --- 00:13:24.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.950 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:13:24.950 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:24.950 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:24.950 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:13:24.950 00:13:24.950 --- 10.0.0.3 ping statistics --- 00:13:24.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.950 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:25.209 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:25.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:25.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:13:25.209 00:13:25.209 --- 10.0.0.1 ping statistics --- 00:13:25.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.209 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:13:25.209 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.209 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:13:25.209 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:25.209 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.209 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:25.209 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:25.209 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.209 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:25.209 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:25.209 21:12:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:25.209 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:25.209 21:12:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:25.209 21:12:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.210 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=71142 00:13:25.210 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 71142 00:13:25.210 21:12:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 71142 ']' 00:13:25.210 21:12:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.210 21:12:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:25.210 21:12:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:25.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.210 21:12:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
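The nvmf_veth_init sequence traced above is the network scaffolding for the rest of the run: the initiator-side veth end stays in the root namespace at 10.0.0.1/24, the two target-side ends are moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.2/24 and 10.0.0.3/24, the host-side peers are enslaved to the nvmf_br bridge, and iptables opens TCP port 4420 for NVMe/TCP. Condensed into a standalone sketch (commands taken from the trace above, not additional captured output):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                                        # reach first target address
ping -c 1 10.0.0.3                                                        # reach second target address
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # reach initiator from the namespace

The three pings are exactly the reachability checks whose output appears in the trace; once they pass, nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and the script waits for its RPC socket at /var/tmp/spdk.sock.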
00:13:25.210 21:12:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:25.210 21:12:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.210 [2024-07-14 21:12:36.631270] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:25.210 [2024-07-14 21:12:36.631403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.469 [2024-07-14 21:12:36.789145] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:25.469 [2024-07-14 21:12:36.946303] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.469 [2024-07-14 21:12:36.946382] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.469 [2024-07-14 21:12:36.946399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.469 [2024-07-14 21:12:36.946412] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.469 [2024-07-14 21:12:36.946426] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.469 [2024-07-14 21:12:36.946640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.469 [2024-07-14 21:12:36.946926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.469 [2024-07-14 21:12:36.947743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.469 [2024-07-14 21:12:36.947808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.728 [2024-07-14 21:12:37.110954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:25.988 21:12:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:25.988 21:12:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:25.988 21:12:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:25.988 21:12:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:25.988 21:12:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.988 21:12:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.988 21:12:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:26.247 [2024-07-14 21:12:37.752734] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.247 21:12:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:26.813 21:12:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:26.813 21:12:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:27.071 21:12:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:27.071 21:12:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:27.371 21:12:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
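Once the target is listening, everything else in fio.sh is driven over that RPC socket through scripts/rpc.py ($rpc_py): a TCP transport, a pool of 64 MiB malloc bdevs with 512-byte blocks (two exported directly, two combined into a raid0 and three into a concat array), and one NVMe-oF subsystem that publishes all four namespaces on 10.0.0.2:4420. The calls traced above and below boil down to the following sketch (condensed from this run's rpc.py invocations, not additional captured output; the initiator-side connect appears later in the trace):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t tcp -o -u 8192          # TCP transport, options as used by fio.sh
$rpc_py bdev_malloc_create 64 512                        # -> Malloc0
$rpc_py bdev_malloc_create 64 512                        # -> Malloc1
$rpc_py bdev_malloc_create 64 512                        # -> Malloc2 (raid0 member)
$rpc_py bdev_malloc_create 64 512                        # -> Malloc3 (raid0 member)
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc_py bdev_malloc_create 64 512                        # -> Malloc4 (concat member)
$rpc_py bdev_malloc_create 64 512                        # -> Malloc5 (concat member)
$rpc_py bdev_malloc_create 64 512                        # -> Malloc6 (concat member)
$rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # serial that waitforserial greps for
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

# initiator side (--hostnqn/--hostid flags omitted here; the full command is in the trace):
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial expects 4 devices (nvme0n1..n4)

The fio-wrapper runs that follow then drive those four namespaces (/dev/nvme0n1 through /dev/nvme0n4) with write, randwrite, and read workloads, and the final read pass deletes the backing bdevs mid-I/O to exercise the error path.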
00:13:27.371 21:12:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:27.653 21:12:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:27.653 21:12:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:27.913 21:12:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:28.171 21:12:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:28.171 21:12:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:28.429 21:12:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:28.429 21:12:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:28.688 21:12:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:28.688 21:12:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:28.947 21:12:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:29.206 21:12:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:29.206 21:12:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:29.465 21:12:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:29.465 21:12:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:29.724 21:12:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.984 [2024-07-14 21:12:41.392898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.984 21:12:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:30.243 21:12:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:30.503 21:12:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.503 21:12:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:30.503 21:12:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:30.503 21:12:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.503 21:12:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:30.503 21:12:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # 
nvme_device_counter=4 00:13:30.503 21:12:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:33.052 21:12:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:33.052 21:12:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:33.052 21:12:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:33.052 21:12:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:33.052 21:12:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.052 21:12:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:33.052 21:12:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:33.052 [global] 00:13:33.052 thread=1 00:13:33.052 invalidate=1 00:13:33.052 rw=write 00:13:33.052 time_based=1 00:13:33.052 runtime=1 00:13:33.052 ioengine=libaio 00:13:33.052 direct=1 00:13:33.052 bs=4096 00:13:33.052 iodepth=1 00:13:33.052 norandommap=0 00:13:33.052 numjobs=1 00:13:33.052 00:13:33.052 verify_dump=1 00:13:33.052 verify_backlog=512 00:13:33.052 verify_state_save=0 00:13:33.052 do_verify=1 00:13:33.052 verify=crc32c-intel 00:13:33.052 [job0] 00:13:33.052 filename=/dev/nvme0n1 00:13:33.052 [job1] 00:13:33.052 filename=/dev/nvme0n2 00:13:33.052 [job2] 00:13:33.052 filename=/dev/nvme0n3 00:13:33.052 [job3] 00:13:33.052 filename=/dev/nvme0n4 00:13:33.052 Could not set queue depth (nvme0n1) 00:13:33.052 Could not set queue depth (nvme0n2) 00:13:33.052 Could not set queue depth (nvme0n3) 00:13:33.052 Could not set queue depth (nvme0n4) 00:13:33.052 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.052 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.052 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.052 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.052 fio-3.35 00:13:33.052 Starting 4 threads 00:13:33.987 00:13:33.987 job0: (groupid=0, jobs=1): err= 0: pid=71327: Sun Jul 14 21:12:45 2024 00:13:33.987 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:33.987 slat (nsec): min=16249, max=71567, avg=22623.15, stdev=7314.85 00:13:33.987 clat (usec): min=269, max=992, avg=343.21, stdev=77.90 00:13:33.987 lat (usec): min=292, max=1028, avg=365.84, stdev=82.53 00:13:33.987 clat percentiles (usec): 00:13:33.987 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 302], 00:13:33.987 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 330], 00:13:33.987 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 371], 95.00th=[ 578], 00:13:33.987 | 99.00th=[ 627], 99.50th=[ 635], 99.90th=[ 873], 99.95th=[ 996], 00:13:33.987 | 99.99th=[ 996] 00:13:33.987 write: IOPS=1646, BW=6585KiB/s (6743kB/s)(6592KiB/1001msec); 0 zone resets 00:13:33.987 slat (nsec): min=20194, max=86729, avg=30773.74, stdev=6705.42 00:13:33.987 clat (usec): min=119, max=740, avg=229.96, stdev=47.36 00:13:33.987 lat (usec): min=141, max=769, avg=260.74, stdev=47.96 00:13:33.987 clat percentiles (usec): 00:13:33.987 | 1.00th=[ 137], 5.00th=[ 149], 10.00th=[ 159], 20.00th=[ 198], 00:13:33.987 | 30.00th=[ 219], 40.00th=[ 229], 50.00th=[ 237], 
60.00th=[ 243], 00:13:33.987 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 277], 00:13:33.987 | 99.00th=[ 334], 99.50th=[ 478], 99.90th=[ 594], 99.95th=[ 742], 00:13:33.987 | 99.99th=[ 742] 00:13:33.987 bw ( KiB/s): min= 8192, max= 8192, per=24.69%, avg=8192.00, stdev= 0.00, samples=1 00:13:33.988 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:33.988 lat (usec) : 250=35.62%, 500=60.33%, 750=3.99%, 1000=0.06% 00:13:33.988 cpu : usr=1.70%, sys=6.70%, ctx=3185, majf=0, minf=7 00:13:33.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:33.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.988 issued rwts: total=1536,1648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:33.988 job1: (groupid=0, jobs=1): err= 0: pid=71328: Sun Jul 14 21:12:45 2024 00:13:33.988 read: IOPS=1528, BW=6114KiB/s (6261kB/s)(6120KiB/1001msec) 00:13:33.988 slat (nsec): min=16100, max=68891, avg=21295.46, stdev=6016.93 00:13:33.988 clat (usec): min=232, max=593, avg=333.95, stdev=48.94 00:13:33.988 lat (usec): min=257, max=646, avg=355.25, stdev=52.16 00:13:33.988 clat percentiles (usec): 00:13:33.988 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:13:33.988 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 330], 00:13:33.988 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 367], 95.00th=[ 465], 00:13:33.988 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 594], 99.95th=[ 594], 00:13:33.988 | 99.99th=[ 594] 00:13:33.988 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:33.988 slat (nsec): min=21545, max=95033, avg=34018.66, stdev=9510.74 00:13:33.988 clat (usec): min=131, max=2708, avg=257.90, stdev=111.62 00:13:33.988 lat (usec): min=157, max=2743, avg=291.92, stdev=115.67 00:13:33.988 clat percentiles (usec): 00:13:33.988 | 1.00th=[ 139], 5.00th=[ 151], 10.00th=[ 161], 20.00th=[ 210], 00:13:33.988 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 247], 00:13:33.988 | 70.00th=[ 260], 80.00th=[ 277], 90.00th=[ 400], 95.00th=[ 429], 00:13:33.988 | 99.00th=[ 465], 99.50th=[ 611], 99.90th=[ 2024], 99.95th=[ 2704], 00:13:33.988 | 99.99th=[ 2704] 00:13:33.988 bw ( KiB/s): min= 8192, max= 8192, per=24.69%, avg=8192.00, stdev= 0.00, samples=1 00:13:33.988 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:33.988 lat (usec) : 250=31.25%, 500=67.22%, 750=1.47% 00:13:33.988 lat (msec) : 4=0.07% 00:13:33.988 cpu : usr=1.80%, sys=6.70%, ctx=3067, majf=0, minf=14 00:13:33.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:33.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.988 issued rwts: total=1530,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:33.988 job2: (groupid=0, jobs=1): err= 0: pid=71329: Sun Jul 14 21:12:45 2024 00:13:33.988 read: IOPS=2139, BW=8559KiB/s (8765kB/s)(8568KiB/1001msec) 00:13:33.988 slat (nsec): min=11777, max=49551, avg=15515.04, stdev=4148.50 00:13:33.988 clat (usec): min=185, max=541, avg=222.72, stdev=23.94 00:13:33.988 lat (usec): min=198, max=556, avg=238.23, stdev=24.67 00:13:33.988 clat percentiles (usec): 00:13:33.988 | 1.00th=[ 190], 5.00th=[ 194], 
10.00th=[ 198], 20.00th=[ 202], 00:13:33.988 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:13:33.988 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 262], 00:13:33.988 | 99.00th=[ 297], 99.50th=[ 318], 99.90th=[ 359], 99.95th=[ 515], 00:13:33.988 | 99.99th=[ 545] 00:13:33.988 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:33.988 slat (nsec): min=14453, max=80879, avg=22319.50, stdev=5060.20 00:13:33.988 clat (usec): min=127, max=488, avg=165.66, stdev=20.14 00:13:33.988 lat (usec): min=146, max=514, avg=187.98, stdev=20.90 00:13:33.988 clat percentiles (usec): 00:13:33.988 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 151], 00:13:33.988 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:13:33.988 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 202], 00:13:33.988 | 99.00th=[ 225], 99.50th=[ 241], 99.90th=[ 269], 99.95th=[ 277], 00:13:33.988 | 99.99th=[ 490] 00:13:33.988 bw ( KiB/s): min=10328, max=10328, per=31.12%, avg=10328.00, stdev= 0.00, samples=1 00:13:33.988 iops : min= 2582, max= 2582, avg=2582.00, stdev= 0.00, samples=1 00:13:33.988 lat (usec) : 250=94.96%, 500=5.00%, 750=0.04% 00:13:33.988 cpu : usr=2.40%, sys=6.50%, ctx=4703, majf=0, minf=3 00:13:33.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:33.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.988 issued rwts: total=2142,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:33.988 job3: (groupid=0, jobs=1): err= 0: pid=71330: Sun Jul 14 21:12:45 2024 00:13:33.988 read: IOPS=2293, BW=9175KiB/s (9395kB/s)(9184KiB/1001msec) 00:13:33.988 slat (nsec): min=12414, max=64609, avg=15766.30, stdev=4106.99 00:13:33.988 clat (usec): min=183, max=2138, avg=217.99, stdev=45.54 00:13:33.988 lat (usec): min=197, max=2155, avg=233.76, stdev=45.78 00:13:33.988 clat percentiles (usec): 00:13:33.988 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 200], 00:13:33.988 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 219], 00:13:33.988 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 253], 00:13:33.988 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 515], 99.95th=[ 586], 00:13:33.988 | 99.99th=[ 2147] 00:13:33.988 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:33.988 slat (nsec): min=14627, max=78934, avg=22330.93, stdev=5605.80 00:13:33.988 clat (usec): min=126, max=2346, avg=155.10, stdev=46.55 00:13:33.988 lat (usec): min=145, max=2370, avg=177.43, stdev=47.21 00:13:33.988 clat percentiles (usec): 00:13:33.988 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:13:33.988 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 157], 00:13:33.988 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 178], 95.00th=[ 186], 00:13:33.988 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 269], 99.95th=[ 302], 00:13:33.988 | 99.99th=[ 2343] 00:13:33.988 bw ( KiB/s): min=10968, max=10968, per=33.05%, avg=10968.00, stdev= 0.00, samples=1 00:13:33.988 iops : min= 2742, max= 2742, avg=2742.00, stdev= 0.00, samples=1 00:13:33.988 lat (usec) : 250=97.08%, 500=2.84%, 750=0.04% 00:13:33.988 lat (msec) : 4=0.04% 00:13:33.988 cpu : usr=1.90%, sys=7.30%, ctx=4857, majf=0, minf=11 00:13:33.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:13:33.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.988 issued rwts: total=2296,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:33.988 00:13:33.988 Run status group 0 (all jobs): 00:13:33.988 READ: bw=29.3MiB/s (30.7MB/s), 6114KiB/s-9175KiB/s (6261kB/s-9395kB/s), io=29.3MiB (30.7MB), run=1001-1001msec 00:13:33.988 WRITE: bw=32.4MiB/s (34.0MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.4MiB (34.0MB), run=1001-1001msec 00:13:33.988 00:13:33.988 Disk stats (read/write): 00:13:33.988 nvme0n1: ios=1276/1536, merge=0/0, ticks=457/371, in_queue=828, util=88.38% 00:13:33.988 nvme0n2: ios=1213/1536, merge=0/0, ticks=412/422, in_queue=834, util=88.77% 00:13:33.988 nvme0n3: ios=1975/2048, merge=0/0, ticks=455/351, in_queue=806, util=89.27% 00:13:33.988 nvme0n4: ios=2048/2104, merge=0/0, ticks=452/354, in_queue=806, util=89.82% 00:13:33.988 21:12:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:33.988 [global] 00:13:33.988 thread=1 00:13:33.988 invalidate=1 00:13:33.988 rw=randwrite 00:13:33.988 time_based=1 00:13:33.988 runtime=1 00:13:33.988 ioengine=libaio 00:13:33.988 direct=1 00:13:33.988 bs=4096 00:13:33.988 iodepth=1 00:13:33.988 norandommap=0 00:13:33.988 numjobs=1 00:13:33.988 00:13:33.988 verify_dump=1 00:13:33.988 verify_backlog=512 00:13:33.988 verify_state_save=0 00:13:33.988 do_verify=1 00:13:33.988 verify=crc32c-intel 00:13:33.988 [job0] 00:13:33.988 filename=/dev/nvme0n1 00:13:33.988 [job1] 00:13:33.988 filename=/dev/nvme0n2 00:13:33.988 [job2] 00:13:33.988 filename=/dev/nvme0n3 00:13:33.988 [job3] 00:13:33.988 filename=/dev/nvme0n4 00:13:33.988 Could not set queue depth (nvme0n1) 00:13:33.988 Could not set queue depth (nvme0n2) 00:13:33.988 Could not set queue depth (nvme0n3) 00:13:33.988 Could not set queue depth (nvme0n4) 00:13:34.246 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.246 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.246 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.247 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.247 fio-3.35 00:13:34.247 Starting 4 threads 00:13:35.623 00:13:35.623 job0: (groupid=0, jobs=1): err= 0: pid=71387: Sun Jul 14 21:12:46 2024 00:13:35.623 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:13:35.623 slat (nsec): min=11269, max=51072, avg=14463.04, stdev=3764.16 00:13:35.623 clat (usec): min=165, max=268, avg=193.00, stdev=15.31 00:13:35.623 lat (usec): min=178, max=285, avg=207.46, stdev=16.26 00:13:35.623 clat percentiles (usec): 00:13:35.623 | 1.00th=[ 172], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:13:35.623 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:13:35.623 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 223], 00:13:35.623 | 99.00th=[ 237], 99.50th=[ 245], 99.90th=[ 262], 99.95th=[ 262], 00:13:35.623 | 99.99th=[ 269] 00:13:35.623 write: IOPS=2964, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec); 0 zone resets 00:13:35.623 slat (nsec): min=16953, max=88360, avg=20493.19, stdev=5085.87 00:13:35.623 clat (usec): 
min=108, max=1579, avg=134.18, stdev=29.48 00:13:35.623 lat (usec): min=131, max=1598, avg=154.67, stdev=30.32 00:13:35.623 clat percentiles (usec): 00:13:35.623 | 1.00th=[ 117], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 124], 00:13:35.623 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 131], 60.00th=[ 135], 00:13:35.623 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 159], 00:13:35.623 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 204], 99.95th=[ 223], 00:13:35.623 | 99.99th=[ 1582] 00:13:35.623 bw ( KiB/s): min=12263, max=12263, per=35.69%, avg=12263.00, stdev= 0.00, samples=1 00:13:35.623 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:13:35.623 lat (usec) : 250=99.82%, 500=0.16% 00:13:35.623 lat (msec) : 2=0.02% 00:13:35.623 cpu : usr=2.60%, sys=7.40%, ctx=5527, majf=0, minf=7 00:13:35.623 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.623 issued rwts: total=2560,2967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.623 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:35.623 job1: (groupid=0, jobs=1): err= 0: pid=71388: Sun Jul 14 21:12:46 2024 00:13:35.623 read: IOPS=1502, BW=6010KiB/s (6154kB/s)(6016KiB/1001msec) 00:13:35.623 slat (nsec): min=17421, max=72278, avg=23937.18, stdev=6326.59 00:13:35.623 clat (usec): min=185, max=718, avg=348.03, stdev=70.72 00:13:35.623 lat (usec): min=204, max=754, avg=371.97, stdev=74.26 00:13:35.623 clat percentiles (usec): 00:13:35.623 | 1.00th=[ 285], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 00:13:35.623 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 334], 00:13:35.623 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 375], 95.00th=[ 562], 00:13:35.623 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 709], 99.95th=[ 717], 00:13:35.623 | 99.99th=[ 717] 00:13:35.623 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:35.623 slat (nsec): min=24002, max=87646, avg=33763.42, stdev=6689.78 00:13:35.623 clat (usec): min=127, max=801, avg=247.42, stdev=44.17 00:13:35.623 lat (usec): min=157, max=835, avg=281.18, stdev=45.35 00:13:35.623 clat percentiles (usec): 00:13:35.623 | 1.00th=[ 143], 5.00th=[ 155], 10.00th=[ 176], 20.00th=[ 233], 00:13:35.623 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:13:35.623 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:13:35.623 | 99.00th=[ 318], 99.50th=[ 371], 99.90th=[ 693], 99.95th=[ 799], 00:13:35.623 | 99.99th=[ 799] 00:13:35.623 bw ( KiB/s): min= 8192, max= 8192, per=23.84%, avg=8192.00, stdev= 0.00, samples=1 00:13:35.623 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:35.623 lat (usec) : 250=20.03%, 500=77.11%, 750=2.83%, 1000=0.03% 00:13:35.623 cpu : usr=1.80%, sys=6.90%, ctx=3040, majf=0, minf=12 00:13:35.623 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.623 issued rwts: total=1504,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.623 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:35.623 job2: (groupid=0, jobs=1): err= 0: pid=71389: Sun Jul 14 21:12:46 2024 00:13:35.623 read: IOPS=1438, BW=5754KiB/s (5892kB/s)(5760KiB/1001msec) 00:13:35.623 slat (nsec): 
min=15502, max=68783, avg=21555.86, stdev=4219.71 00:13:35.623 clat (usec): min=247, max=2519, avg=343.38, stdev=73.63 00:13:35.623 lat (usec): min=272, max=2553, avg=364.94, stdev=75.04 00:13:35.623 clat percentiles (usec): 00:13:35.623 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:13:35.623 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 338], 00:13:35.623 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 461], 00:13:35.623 | 99.00th=[ 523], 99.50th=[ 586], 99.90th=[ 930], 99.95th=[ 2507], 00:13:35.623 | 99.99th=[ 2507] 00:13:35.623 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:35.623 slat (nsec): min=22053, max=97080, avg=36546.44, stdev=8191.14 00:13:35.623 clat (usec): min=142, max=1295, avg=266.69, stdev=73.38 00:13:35.623 lat (usec): min=171, max=1331, avg=303.23, stdev=77.30 00:13:35.623 clat percentiles (usec): 00:13:35.623 | 1.00th=[ 149], 5.00th=[ 165], 10.00th=[ 178], 20.00th=[ 239], 00:13:35.623 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:13:35.623 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 375], 95.00th=[ 424], 00:13:35.623 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[ 1004], 99.95th=[ 1303], 00:13:35.623 | 99.99th=[ 1303] 00:13:35.623 bw ( KiB/s): min= 7976, max= 7976, per=23.21%, avg=7976.00, stdev= 0.00, samples=1 00:13:35.623 iops : min= 1994, max= 1994, avg=1994.00, stdev= 0.00, samples=1 00:13:35.623 lat (usec) : 250=18.85%, 500=80.11%, 750=0.91%, 1000=0.07% 00:13:35.623 lat (msec) : 2=0.03%, 4=0.03% 00:13:35.623 cpu : usr=1.90%, sys=6.90%, ctx=2977, majf=0, minf=15 00:13:35.623 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.623 issued rwts: total=1440,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.623 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:35.623 job3: (groupid=0, jobs=1): err= 0: pid=71390: Sun Jul 14 21:12:46 2024 00:13:35.623 read: IOPS=2455, BW=9822KiB/s (10.1MB/s)(9832KiB/1001msec) 00:13:35.623 slat (usec): min=11, max=112, avg=14.43, stdev= 3.73 00:13:35.623 clat (usec): min=128, max=503, avg=208.13, stdev=17.21 00:13:35.623 lat (usec): min=194, max=517, avg=222.56, stdev=17.48 00:13:35.623 clat percentiles (usec): 00:13:35.623 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 196], 00:13:35.623 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:13:35.623 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 239], 00:13:35.624 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 338], 99.95th=[ 351], 00:13:35.624 | 99.99th=[ 502] 00:13:35.624 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:35.624 slat (nsec): min=15384, max=78631, avg=20654.78, stdev=5002.62 00:13:35.624 clat (usec): min=127, max=2859, avg=152.60, stdev=56.84 00:13:35.624 lat (usec): min=145, max=2894, avg=173.26, stdev=57.62 00:13:35.624 clat percentiles (usec): 00:13:35.624 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 135], 20.00th=[ 139], 00:13:35.624 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:13:35.624 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 174], 95.00th=[ 186], 00:13:35.624 | 99.00th=[ 231], 99.50th=[ 243], 99.90th=[ 265], 99.95th=[ 322], 00:13:35.624 | 99.99th=[ 2868] 00:13:35.624 bw ( KiB/s): min=12288, max=12288, per=35.76%, avg=12288.00, stdev= 0.00, samples=1 00:13:35.624 
iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:35.624 lat (usec) : 250=98.92%, 500=1.04%, 750=0.02% 00:13:35.624 lat (msec) : 4=0.02% 00:13:35.624 cpu : usr=1.90%, sys=7.00%, ctx=5020, majf=0, minf=11 00:13:35.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.624 issued rwts: total=2458,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:35.624 00:13:35.624 Run status group 0 (all jobs): 00:13:35.624 READ: bw=31.1MiB/s (32.6MB/s), 5754KiB/s-9.99MiB/s (5892kB/s-10.5MB/s), io=31.1MiB (32.6MB), run=1001-1001msec 00:13:35.624 WRITE: bw=33.6MiB/s (35.2MB/s), 6138KiB/s-11.6MiB/s (6285kB/s-12.1MB/s), io=33.6MiB (35.2MB), run=1001-1001msec 00:13:35.624 00:13:35.624 Disk stats (read/write): 00:13:35.624 nvme0n1: ios=2273/2560, merge=0/0, ticks=492/360, in_queue=852, util=89.18% 00:13:35.624 nvme0n2: ios=1166/1536, merge=0/0, ticks=430/407, in_queue=837, util=88.55% 00:13:35.624 nvme0n3: ios=1072/1536, merge=0/0, ticks=375/434, in_queue=809, util=89.36% 00:13:35.624 nvme0n4: ios=2048/2329, merge=0/0, ticks=441/362, in_queue=803, util=89.82% 00:13:35.624 21:12:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:35.624 [global] 00:13:35.624 thread=1 00:13:35.624 invalidate=1 00:13:35.624 rw=write 00:13:35.624 time_based=1 00:13:35.624 runtime=1 00:13:35.624 ioengine=libaio 00:13:35.624 direct=1 00:13:35.624 bs=4096 00:13:35.624 iodepth=128 00:13:35.624 norandommap=0 00:13:35.624 numjobs=1 00:13:35.624 00:13:35.624 verify_dump=1 00:13:35.624 verify_backlog=512 00:13:35.624 verify_state_save=0 00:13:35.624 do_verify=1 00:13:35.624 verify=crc32c-intel 00:13:35.624 [job0] 00:13:35.624 filename=/dev/nvme0n1 00:13:35.624 [job1] 00:13:35.624 filename=/dev/nvme0n2 00:13:35.624 [job2] 00:13:35.624 filename=/dev/nvme0n3 00:13:35.624 [job3] 00:13:35.624 filename=/dev/nvme0n4 00:13:35.624 Could not set queue depth (nvme0n1) 00:13:35.624 Could not set queue depth (nvme0n2) 00:13:35.624 Could not set queue depth (nvme0n3) 00:13:35.624 Could not set queue depth (nvme0n4) 00:13:35.624 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:35.624 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:35.624 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:35.624 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:35.624 fio-3.35 00:13:35.624 Starting 4 threads 00:13:36.999 00:13:36.999 job0: (groupid=0, jobs=1): err= 0: pid=71451: Sun Jul 14 21:12:48 2024 00:13:36.999 read: IOPS=2234, BW=8937KiB/s (9152kB/s)(8964KiB/1003msec) 00:13:36.999 slat (usec): min=7, max=7122, avg=208.17, stdev=1065.62 00:13:36.999 clat (usec): min=1152, max=29153, avg=26541.54, stdev=3212.51 00:13:36.999 lat (usec): min=6560, max=29180, avg=26749.71, stdev=3027.45 00:13:36.999 clat percentiles (usec): 00:13:36.999 | 1.00th=[ 7046], 5.00th=[21365], 10.00th=[25822], 20.00th=[26346], 00:13:36.999 | 30.00th=[26608], 40.00th=[27132], 50.00th=[27132], 60.00th=[27395], 00:13:36.999 | 70.00th=[27395], 80.00th=[27919], 
90.00th=[28443], 95.00th=[28705], 00:13:36.999 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29230], 99.95th=[29230], 00:13:36.999 | 99.99th=[29230] 00:13:36.999 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:13:36.999 slat (usec): min=11, max=6774, avg=201.67, stdev=1000.20 00:13:36.999 clat (usec): min=19093, max=29546, avg=25994.71, stdev=1322.70 00:13:36.999 lat (usec): min=19743, max=29599, avg=26196.38, stdev=886.19 00:13:36.999 clat percentiles (usec): 00:13:36.999 | 1.00th=[19792], 5.00th=[24773], 10.00th=[25297], 20.00th=[25297], 00:13:36.999 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:13:36.999 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27395], 95.00th=[27657], 00:13:36.999 | 99.00th=[28443], 99.50th=[28967], 99.90th=[29492], 99.95th=[29492], 00:13:36.999 | 99.99th=[29492] 00:13:36.999 bw ( KiB/s): min=10232, max=10268, per=17.36%, avg=10250.00, stdev=25.46, samples=2 00:13:36.999 iops : min= 2558, max= 2567, avg=2562.50, stdev= 6.36, samples=2 00:13:36.999 lat (msec) : 2=0.02%, 10=0.67%, 20=1.33%, 50=97.98% 00:13:36.999 cpu : usr=1.80%, sys=6.99%, ctx=151, majf=0, minf=17 00:13:36.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:36.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:36.999 issued rwts: total=2241,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:36.999 job1: (groupid=0, jobs=1): err= 0: pid=71452: Sun Jul 14 21:12:48 2024 00:13:36.999 read: IOPS=4972, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1003msec) 00:13:36.999 slat (usec): min=7, max=6845, avg=93.73, stdev=577.44 00:13:36.999 clat (usec): min=2187, max=21014, avg=13093.28, stdev=1562.64 00:13:36.999 lat (usec): min=6207, max=24644, avg=13187.01, stdev=1586.04 00:13:36.999 clat percentiles (usec): 00:13:36.999 | 1.00th=[ 7242], 5.00th=[11207], 10.00th=[12256], 20.00th=[12518], 00:13:36.999 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:13:36.999 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14353], 95.00th=[14484], 00:13:36.999 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20841], 99.95th=[21103], 00:13:36.999 | 99.99th=[21103] 00:13:36.999 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:13:36.999 slat (usec): min=10, max=8125, avg=96.04, stdev=556.39 00:13:36.999 clat (usec): min=6371, max=16279, avg=12067.29, stdev=1156.99 00:13:36.999 lat (usec): min=8396, max=16707, avg=12163.33, stdev=1044.89 00:13:36.999 clat percentiles (usec): 00:13:36.999 | 1.00th=[ 8094], 5.00th=[10552], 10.00th=[10814], 20.00th=[11338], 00:13:36.999 | 30.00th=[11731], 40.00th=[11994], 50.00th=[11994], 60.00th=[12125], 00:13:36.999 | 70.00th=[12387], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:13:36.999 | 99.00th=[15664], 99.50th=[15795], 99.90th=[16188], 99.95th=[16188], 00:13:37.000 | 99.99th=[16319] 00:13:37.000 bw ( KiB/s): min=20439, max=20480, per=34.65%, avg=20459.50, stdev=28.99, samples=2 00:13:37.000 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:13:37.000 lat (msec) : 4=0.01%, 10=4.05%, 20=95.63%, 50=0.32% 00:13:37.000 cpu : usr=5.29%, sys=13.67%, ctx=218, majf=0, minf=15 00:13:37.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:37.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.000 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:37.000 issued rwts: total=4987,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:37.000 job2: (groupid=0, jobs=1): err= 0: pid=71453: Sun Jul 14 21:12:48 2024 00:13:37.000 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:13:37.000 slat (usec): min=6, max=4692, avg=112.64, stdev=451.04 00:13:37.000 clat (usec): min=11267, max=20357, avg=14798.22, stdev=1197.14 00:13:37.000 lat (usec): min=11297, max=20370, avg=14910.86, stdev=1250.14 00:13:37.000 clat percentiles (usec): 00:13:37.000 | 1.00th=[11731], 5.00th=[12911], 10.00th=[13698], 20.00th=[14222], 00:13:37.000 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:13:37.000 | 70.00th=[14877], 80.00th=[15139], 90.00th=[16581], 95.00th=[17171], 00:13:37.000 | 99.00th=[18220], 99.50th=[18744], 99.90th=[20055], 99.95th=[20317], 00:13:37.000 | 99.99th=[20317] 00:13:37.000 write: IOPS=4551, BW=17.8MiB/s (18.6MB/s)(17.8MiB/1003msec); 0 zone resets 00:13:37.000 slat (usec): min=13, max=4245, avg=110.12, stdev=485.68 00:13:37.000 clat (usec): min=482, max=19020, avg=14458.48, stdev=1632.17 00:13:37.000 lat (usec): min=3686, max=19071, avg=14568.60, stdev=1686.99 00:13:37.000 clat percentiles (usec): 00:13:37.000 | 1.00th=[ 8094], 5.00th=[12911], 10.00th=[13435], 20.00th=[13829], 00:13:37.000 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:13:37.000 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15926], 95.00th=[16909], 00:13:37.000 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:13:37.000 | 99.99th=[19006] 00:13:37.000 bw ( KiB/s): min=17536, max=17960, per=30.06%, avg=17748.00, stdev=299.81, samples=2 00:13:37.000 iops : min= 4384, max= 4490, avg=4437.00, stdev=74.95, samples=2 00:13:37.000 lat (usec) : 500=0.01% 00:13:37.000 lat (msec) : 4=0.13%, 10=0.84%, 20=98.91%, 50=0.10% 00:13:37.000 cpu : usr=5.69%, sys=11.98%, ctx=409, majf=0, minf=7 00:13:37.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:37.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:37.000 issued rwts: total=4096,4565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:37.000 job3: (groupid=0, jobs=1): err= 0: pid=71454: Sun Jul 14 21:12:48 2024 00:13:37.000 read: IOPS=2234, BW=8937KiB/s (9152kB/s)(8964KiB/1003msec) 00:13:37.000 slat (usec): min=6, max=6990, avg=208.90, stdev=1067.89 00:13:37.000 clat (usec): min=847, max=28956, avg=26482.25, stdev=3127.19 00:13:37.000 lat (usec): min=6984, max=28976, avg=26691.15, stdev=2935.48 00:13:37.000 clat percentiles (usec): 00:13:37.000 | 1.00th=[ 7439], 5.00th=[21365], 10.00th=[25822], 20.00th=[26346], 00:13:37.000 | 30.00th=[26608], 40.00th=[27132], 50.00th=[27132], 60.00th=[27395], 00:13:37.000 | 70.00th=[27395], 80.00th=[27657], 90.00th=[28443], 95.00th=[28705], 00:13:37.000 | 99.00th=[28967], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:13:37.000 | 99.99th=[28967] 00:13:37.000 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:13:37.000 slat (usec): min=11, max=6428, avg=201.08, stdev=992.53 00:13:37.000 clat (usec): min=19335, max=28268, avg=26072.90, stdev=1253.75 00:13:37.000 lat (usec): min=19971, max=28594, avg=26273.99, stdev=780.17 00:13:37.000 clat percentiles (usec): 
00:13:37.000 | 1.00th=[20055], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:13:37.000 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:13:37.000 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27657], 95.00th=[27657], 00:13:37.000 | 99.00th=[27919], 99.50th=[28181], 99.90th=[28181], 99.95th=[28181], 00:13:37.000 | 99.99th=[28181] 00:13:37.000 bw ( KiB/s): min=10232, max=10248, per=17.34%, avg=10240.00, stdev=11.31, samples=2 00:13:37.000 iops : min= 2558, max= 2562, avg=2560.00, stdev= 2.83, samples=2 00:13:37.000 lat (usec) : 1000=0.02% 00:13:37.000 lat (msec) : 10=0.67%, 20=1.02%, 50=98.29% 00:13:37.000 cpu : usr=1.50%, sys=8.58%, ctx=151, majf=0, minf=11 00:13:37.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:37.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:37.000 issued rwts: total=2241,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:37.000 00:13:37.000 Run status group 0 (all jobs): 00:13:37.000 READ: bw=52.8MiB/s (55.4MB/s), 8937KiB/s-19.4MiB/s (9152kB/s-20.4MB/s), io=53.0MiB (55.6MB), run=1003-1003msec 00:13:37.000 WRITE: bw=57.7MiB/s (60.5MB/s), 9.97MiB/s-19.9MiB/s (10.5MB/s-20.9MB/s), io=57.8MiB (60.6MB), run=1003-1003msec 00:13:37.000 00:13:37.000 Disk stats (read/write): 00:13:37.000 nvme0n1: ios=2098/2048, merge=0/0, ticks=11941/10564, in_queue=22505, util=87.58% 00:13:37.000 nvme0n2: ios=4137/4480, merge=0/0, ticks=50779/49544, in_queue=100323, util=87.65% 00:13:37.000 nvme0n3: ios=3584/3791, merge=0/0, ticks=16849/15601, in_queue=32450, util=89.18% 00:13:37.000 nvme0n4: ios=2048/2048, merge=0/0, ticks=13134/12306, in_queue=25440, util=89.64% 00:13:37.000 21:12:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:37.000 [global] 00:13:37.000 thread=1 00:13:37.000 invalidate=1 00:13:37.000 rw=randwrite 00:13:37.000 time_based=1 00:13:37.000 runtime=1 00:13:37.000 ioengine=libaio 00:13:37.000 direct=1 00:13:37.000 bs=4096 00:13:37.000 iodepth=128 00:13:37.000 norandommap=0 00:13:37.000 numjobs=1 00:13:37.000 00:13:37.000 verify_dump=1 00:13:37.000 verify_backlog=512 00:13:37.000 verify_state_save=0 00:13:37.000 do_verify=1 00:13:37.000 verify=crc32c-intel 00:13:37.000 [job0] 00:13:37.000 filename=/dev/nvme0n1 00:13:37.000 [job1] 00:13:37.000 filename=/dev/nvme0n2 00:13:37.000 [job2] 00:13:37.000 filename=/dev/nvme0n3 00:13:37.000 [job3] 00:13:37.000 filename=/dev/nvme0n4 00:13:37.000 Could not set queue depth (nvme0n1) 00:13:37.000 Could not set queue depth (nvme0n2) 00:13:37.000 Could not set queue depth (nvme0n3) 00:13:37.000 Could not set queue depth (nvme0n4) 00:13:37.000 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:37.000 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:37.000 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:37.000 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:37.000 fio-3.35 00:13:37.000 Starting 4 threads 00:13:38.377 00:13:38.377 job0: (groupid=0, jobs=1): err= 0: pid=71507: Sun Jul 14 21:12:49 2024 00:13:38.377 read: IOPS=4734, 
BW=18.5MiB/s (19.4MB/s)(18.5MiB/1001msec) 00:13:38.377 slat (usec): min=9, max=3811, avg=99.73, stdev=433.35 00:13:38.377 clat (usec): min=624, max=17352, avg=13213.39, stdev=1501.02 00:13:38.377 lat (usec): min=637, max=17368, avg=13313.12, stdev=1441.69 00:13:38.377 clat percentiles (usec): 00:13:38.378 | 1.00th=[ 6521], 5.00th=[12387], 10.00th=[12518], 20.00th=[12780], 00:13:38.378 | 30.00th=[12911], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:13:38.378 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14484], 95.00th=[15533], 00:13:38.378 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:13:38.378 | 99.99th=[17433] 00:13:38.378 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:13:38.378 slat (usec): min=10, max=4320, avg=95.25, stdev=407.85 00:13:38.378 clat (usec): min=9500, max=17137, avg=12493.47, stdev=688.77 00:13:38.378 lat (usec): min=9816, max=17188, avg=12588.71, stdev=559.80 00:13:38.378 clat percentiles (usec): 00:13:38.378 | 1.00th=[10028], 5.00th=[11994], 10.00th=[12125], 20.00th=[12256], 00:13:38.378 | 30.00th=[12387], 40.00th=[12387], 50.00th=[12518], 60.00th=[12518], 00:13:38.378 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13042], 95.00th=[13435], 00:13:38.378 | 99.00th=[14484], 99.50th=[14877], 99.90th=[17171], 99.95th=[17171], 00:13:38.378 | 99.99th=[17171] 00:13:38.378 bw ( KiB/s): min=20480, max=20480, per=35.00%, avg=20480.00, stdev= 0.00, samples=1 00:13:38.378 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:13:38.378 lat (usec) : 750=0.03% 00:13:38.378 lat (msec) : 4=0.32%, 10=1.11%, 20=98.54% 00:13:38.378 cpu : usr=5.10%, sys=13.10%, ctx=367, majf=0, minf=9 00:13:38.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:38.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:38.378 issued rwts: total=4739,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:38.378 job1: (groupid=0, jobs=1): err= 0: pid=71508: Sun Jul 14 21:12:49 2024 00:13:38.378 read: IOPS=2265, BW=9060KiB/s (9278kB/s)(9196KiB/1015msec) 00:13:38.378 slat (usec): min=5, max=14141, avg=219.81, stdev=954.66 00:13:38.378 clat (usec): min=13112, max=44481, avg=28590.54, stdev=4565.72 00:13:38.378 lat (usec): min=15273, max=44493, avg=28810.35, stdev=4591.02 00:13:38.378 clat percentiles (usec): 00:13:38.378 | 1.00th=[16319], 5.00th=[21365], 10.00th=[23462], 20.00th=[25822], 00:13:38.378 | 30.00th=[26608], 40.00th=[27657], 50.00th=[28181], 60.00th=[28967], 00:13:38.378 | 70.00th=[29754], 80.00th=[32113], 90.00th=[34866], 95.00th=[36963], 00:13:38.378 | 99.00th=[39584], 99.50th=[41681], 99.90th=[43779], 99.95th=[44303], 00:13:38.378 | 99.99th=[44303] 00:13:38.378 write: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(10.0MiB/1015msec); 0 zone resets 00:13:38.378 slat (usec): min=5, max=12074, avg=186.24, stdev=850.39 00:13:38.378 clat (usec): min=10372, max=35524, avg=24576.14, stdev=4347.84 00:13:38.378 lat (usec): min=12479, max=35574, avg=24762.38, stdev=4292.92 00:13:38.378 clat percentiles (usec): 00:13:38.378 | 1.00th=[14222], 5.00th=[16909], 10.00th=[18744], 20.00th=[20579], 00:13:38.378 | 30.00th=[21890], 40.00th=[23987], 50.00th=[25560], 60.00th=[26346], 00:13:38.378 | 70.00th=[27132], 80.00th=[28181], 90.00th=[29492], 95.00th=[31065], 00:13:38.378 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 
00:13:38.378 | 99.99th=[35390] 00:13:38.378 bw ( KiB/s): min= 9936, max=10565, per=17.52%, avg=10250.50, stdev=444.77, samples=2 00:13:38.378 iops : min= 2484, max= 2641, avg=2562.50, stdev=111.02, samples=2 00:13:38.378 lat (msec) : 20=9.96%, 50=90.04% 00:13:38.378 cpu : usr=1.87%, sys=7.78%, ctx=577, majf=0, minf=12 00:13:38.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:38.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:38.378 issued rwts: total=2299,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:38.378 job2: (groupid=0, jobs=1): err= 0: pid=71509: Sun Jul 14 21:12:49 2024 00:13:38.378 read: IOPS=2065, BW=8261KiB/s (8459kB/s)(8352KiB/1011msec) 00:13:38.378 slat (usec): min=8, max=9900, avg=226.44, stdev=885.00 00:13:38.378 clat (usec): min=3862, max=42202, avg=28591.92, stdev=4496.14 00:13:38.378 lat (usec): min=13413, max=42220, avg=28818.36, stdev=4486.09 00:13:38.378 clat percentiles (usec): 00:13:38.378 | 1.00th=[16909], 5.00th=[21627], 10.00th=[23987], 20.00th=[25560], 00:13:38.378 | 30.00th=[26346], 40.00th=[27395], 50.00th=[27919], 60.00th=[28443], 00:13:38.378 | 70.00th=[29754], 80.00th=[32637], 90.00th=[35390], 95.00th=[36963], 00:13:38.378 | 99.00th=[39584], 99.50th=[39584], 99.90th=[42206], 99.95th=[42206], 00:13:38.378 | 99.99th=[42206] 00:13:38.378 write: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec); 0 zone resets 00:13:38.378 slat (usec): min=5, max=12259, avg=201.24, stdev=898.47 00:13:38.378 clat (usec): min=15575, max=38534, avg=26098.49, stdev=4201.66 00:13:38.378 lat (usec): min=15596, max=40098, avg=26299.73, stdev=4254.82 00:13:38.378 clat percentiles (usec): 00:13:38.378 | 1.00th=[16188], 5.00th=[17957], 10.00th=[19792], 20.00th=[23200], 00:13:38.378 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[27132], 00:13:38.378 | 70.00th=[27657], 80.00th=[28967], 90.00th=[30016], 95.00th=[34341], 00:13:38.378 | 99.00th=[36963], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:13:38.378 | 99.99th=[38536] 00:13:38.378 bw ( KiB/s): min= 9200, max=10576, per=16.90%, avg=9888.00, stdev=972.98, samples=2 00:13:38.378 iops : min= 2300, max= 2644, avg=2472.00, stdev=243.24, samples=2 00:13:38.378 lat (msec) : 4=0.02%, 20=6.93%, 50=93.05% 00:13:38.378 cpu : usr=2.38%, sys=6.73%, ctx=578, majf=0, minf=17 00:13:38.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:13:38.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:38.378 issued rwts: total=2088,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:38.378 job3: (groupid=0, jobs=1): err= 0: pid=71510: Sun Jul 14 21:12:49 2024 00:13:38.378 read: IOPS=4321, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1005msec) 00:13:38.378 slat (usec): min=8, max=6404, avg=112.66, stdev=514.10 00:13:38.378 clat (usec): min=4391, max=20761, avg=14479.19, stdev=1848.52 00:13:38.378 lat (usec): min=4405, max=26482, avg=14591.85, stdev=1860.62 00:13:38.378 clat percentiles (usec): 00:13:38.378 | 1.00th=[ 5342], 5.00th=[11600], 10.00th=[12649], 20.00th=[13698], 00:13:38.378 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14877], 00:13:38.378 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15664], 
95.00th=[17695], 00:13:38.378 | 99.00th=[19530], 99.50th=[19792], 99.90th=[20579], 99.95th=[20579], 00:13:38.378 | 99.99th=[20841] 00:13:38.378 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:13:38.378 slat (usec): min=10, max=6166, avg=102.68, stdev=584.87 00:13:38.378 clat (usec): min=6415, max=21285, avg=13893.18, stdev=1562.30 00:13:38.378 lat (usec): min=6440, max=21308, avg=13995.86, stdev=1651.44 00:13:38.378 clat percentiles (usec): 00:13:38.378 | 1.00th=[10028], 5.00th=[11994], 10.00th=[12518], 20.00th=[13042], 00:13:38.378 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13698], 60.00th=[13829], 00:13:38.378 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15533], 95.00th=[17171], 00:13:38.378 | 99.00th=[19006], 99.50th=[19792], 99.90th=[21365], 99.95th=[21365], 00:13:38.378 | 99.99th=[21365] 00:13:38.378 bw ( KiB/s): min=18324, max=18576, per=31.53%, avg=18450.00, stdev=178.19, samples=2 00:13:38.378 iops : min= 4581, max= 4644, avg=4612.50, stdev=44.55, samples=2 00:13:38.378 lat (msec) : 10=1.35%, 20=98.29%, 50=0.36% 00:13:38.378 cpu : usr=3.88%, sys=13.55%, ctx=317, majf=0, minf=11 00:13:38.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:38.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:38.378 issued rwts: total=4343,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:38.378 00:13:38.378 Run status group 0 (all jobs): 00:13:38.378 READ: bw=51.8MiB/s (54.4MB/s), 8261KiB/s-18.5MiB/s (8459kB/s-19.4MB/s), io=52.6MiB (55.2MB), run=1001-1015msec 00:13:38.378 WRITE: bw=57.1MiB/s (59.9MB/s), 9.85MiB/s-20.0MiB/s (10.3MB/s-20.9MB/s), io=58.0MiB (60.8MB), run=1001-1015msec 00:13:38.378 00:13:38.378 Disk stats (read/write): 00:13:38.378 nvme0n1: ios=4146/4352, merge=0/0, ticks=12597/11814, in_queue=24411, util=88.37% 00:13:38.378 nvme0n2: ios=2058/2048, merge=0/0, ticks=28378/23861, in_queue=52239, util=88.57% 00:13:38.378 nvme0n3: ios=1856/2048, merge=0/0, ticks=25892/24659, in_queue=50551, util=87.60% 00:13:38.378 nvme0n4: ios=3584/4096, merge=0/0, ticks=25568/23580, in_queue=49148, util=89.62% 00:13:38.378 21:12:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:38.378 21:12:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=71524 00:13:38.378 21:12:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:38.378 21:12:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:38.378 [global] 00:13:38.378 thread=1 00:13:38.378 invalidate=1 00:13:38.378 rw=read 00:13:38.378 time_based=1 00:13:38.378 runtime=10 00:13:38.378 ioengine=libaio 00:13:38.378 direct=1 00:13:38.378 bs=4096 00:13:38.378 iodepth=1 00:13:38.378 norandommap=1 00:13:38.378 numjobs=1 00:13:38.378 00:13:38.378 [job0] 00:13:38.378 filename=/dev/nvme0n1 00:13:38.378 [job1] 00:13:38.378 filename=/dev/nvme0n2 00:13:38.378 [job2] 00:13:38.378 filename=/dev/nvme0n3 00:13:38.378 [job3] 00:13:38.378 filename=/dev/nvme0n4 00:13:38.378 Could not set queue depth (nvme0n1) 00:13:38.378 Could not set queue depth (nvme0n2) 00:13:38.378 Could not set queue depth (nvme0n3) 00:13:38.378 Could not set queue depth (nvme0n4) 00:13:38.378 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:38.378 job1: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:38.378 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:38.378 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:38.378 fio-3.35 00:13:38.378 Starting 4 threads 00:13:41.664 21:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:41.664 fio: pid=71567, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:41.664 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=36790272, buflen=4096 00:13:41.664 21:12:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:41.664 fio: pid=71566, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:41.664 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=56442880, buflen=4096 00:13:41.664 21:12:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:41.664 21:12:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:41.923 fio: pid=71564, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:41.923 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=44421120, buflen=4096 00:13:42.180 21:12:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:42.180 21:12:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:42.180 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=66433024, buflen=4096 00:13:42.180 fio: pid=71565, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:42.439 00:13:42.439 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71564: Sun Jul 14 21:12:53 2024 00:13:42.439 read: IOPS=3154, BW=12.3MiB/s (12.9MB/s)(42.4MiB/3438msec) 00:13:42.439 slat (usec): min=8, max=14474, avg=16.86, stdev=193.72 00:13:42.439 clat (usec): min=3, max=3031, avg=298.76, stdev=55.37 00:13:42.439 lat (usec): min=170, max=14702, avg=315.62, stdev=203.80 00:13:42.439 clat percentiles (usec): 00:13:42.439 | 1.00th=[ 176], 5.00th=[ 192], 10.00th=[ 243], 20.00th=[ 281], 00:13:42.439 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 310], 00:13:42.439 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 343], 95.00th=[ 355], 00:13:42.439 | 99.00th=[ 371], 99.50th=[ 383], 99.90th=[ 490], 99.95th=[ 660], 00:13:42.439 | 99.99th=[ 2409] 00:13:42.439 bw ( KiB/s): min=12031, max=12392, per=23.62%, avg=12261.17, stdev=134.46, samples=6 00:13:42.439 iops : min= 3007, max= 3098, avg=3065.17, stdev=33.87, samples=6 00:13:42.439 lat (usec) : 4=0.01%, 250=11.11%, 500=88.78%, 750=0.06%, 1000=0.01% 00:13:42.439 lat (msec) : 4=0.02% 00:13:42.439 cpu : usr=1.11%, sys=3.81%, ctx=10857, majf=0, minf=1 00:13:42.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:42.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.439 issued rwts: total=10846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.439 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:13:42.439 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71565: Sun Jul 14 21:12:53 2024 00:13:42.439 read: IOPS=4225, BW=16.5MiB/s (17.3MB/s)(63.4MiB/3839msec) 00:13:42.439 slat (usec): min=8, max=10767, avg=17.08, stdev=131.54 00:13:42.439 clat (usec): min=157, max=153911, avg=218.05, stdev=1208.37 00:13:42.439 lat (usec): min=171, max=153961, avg=235.14, stdev=1216.36 00:13:42.439 clat percentiles (usec): 00:13:42.439 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:13:42.439 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:13:42.439 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 243], 95.00th=[ 262], 00:13:42.439 | 99.00th=[ 322], 99.50th=[ 351], 99.90th=[ 848], 99.95th=[ 1090], 00:13:42.439 | 99.99th=[ 3687] 00:13:42.439 bw ( KiB/s): min=10944, max=18440, per=32.75%, avg=17004.14, stdev=2714.95, samples=7 00:13:42.439 iops : min= 2736, max= 4610, avg=4251.00, stdev=678.73, samples=7 00:13:42.439 lat (usec) : 250=92.74%, 500=7.01%, 750=0.10%, 1000=0.09% 00:13:42.439 lat (msec) : 2=0.03%, 4=0.02%, 250=0.01% 00:13:42.439 cpu : usr=1.17%, sys=5.45%, ctx=16233, majf=0, minf=1 00:13:42.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:42.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.439 issued rwts: total=16220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:42.439 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71566: Sun Jul 14 21:12:53 2024 00:13:42.439 read: IOPS=4350, BW=17.0MiB/s (17.8MB/s)(53.8MiB/3168msec) 00:13:42.439 slat (usec): min=11, max=11863, avg=16.77, stdev=125.37 00:13:42.439 clat (usec): min=177, max=1815, avg=211.31, stdev=38.65 00:13:42.439 lat (usec): min=190, max=12106, avg=228.08, stdev=131.74 00:13:42.439 clat percentiles (usec): 00:13:42.439 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 194], 00:13:42.439 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:13:42.439 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 247], 00:13:42.439 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 506], 99.95th=[ 938], 00:13:42.439 | 99.99th=[ 1778] 00:13:42.439 bw ( KiB/s): min=16912, max=17752, per=33.59%, avg=17439.67, stdev=403.62, samples=6 00:13:42.439 iops : min= 4228, max= 4438, avg=4359.83, stdev=101.03, samples=6 00:13:42.439 lat (usec) : 250=96.11%, 500=3.78%, 750=0.04%, 1000=0.02% 00:13:42.439 lat (msec) : 2=0.04% 00:13:42.439 cpu : usr=1.71%, sys=5.56%, ctx=13784, majf=0, minf=1 00:13:42.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:42.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.439 issued rwts: total=13781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:42.439 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71567: Sun Jul 14 21:12:53 2024 00:13:42.439 read: IOPS=3055, BW=11.9MiB/s (12.5MB/s)(35.1MiB/2940msec) 00:13:42.439 slat (nsec): min=8960, max=71276, avg=14891.20, stdev=5390.39 00:13:42.439 clat (usec): min=239, max=2466, avg=310.72, stdev=34.05 
00:13:42.439 lat (usec): min=252, max=2489, avg=325.61, stdev=34.31 00:13:42.439 clat percentiles (usec): 00:13:42.439 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 289], 00:13:42.439 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 314], 00:13:42.439 | 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 343], 95.00th=[ 355], 00:13:42.439 | 99.00th=[ 371], 99.50th=[ 379], 99.90th=[ 445], 99.95th=[ 578], 00:13:42.439 | 99.99th=[ 2474] 00:13:42.439 bw ( KiB/s): min=12048, max=12384, per=23.61%, avg=12257.60, stdev=140.33, samples=5 00:13:42.439 iops : min= 3012, max= 3096, avg=3064.40, stdev=35.08, samples=5 00:13:42.439 lat (usec) : 250=0.01%, 500=99.91%, 750=0.06% 00:13:42.439 lat (msec) : 4=0.01% 00:13:42.439 cpu : usr=1.09%, sys=4.32%, ctx=8988, majf=0, minf=1 00:13:42.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:42.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.439 issued rwts: total=8983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:42.439 00:13:42.439 Run status group 0 (all jobs): 00:13:42.440 READ: bw=50.7MiB/s (53.2MB/s), 11.9MiB/s-17.0MiB/s (12.5MB/s-17.8MB/s), io=195MiB (204MB), run=2940-3839msec 00:13:42.440 00:13:42.440 Disk stats (read/write): 00:13:42.440 nvme0n1: ios=10606/0, merge=0/0, ticks=2996/0, in_queue=2996, util=95.42% 00:13:42.440 nvme0n2: ios=15195/0, merge=0/0, ticks=3404/0, in_queue=3404, util=95.99% 00:13:42.440 nvme0n3: ios=13577/0, merge=0/0, ticks=2910/0, in_queue=2910, util=96.30% 00:13:42.440 nvme0n4: ios=8772/0, merge=0/0, ticks=2628/0, in_queue=2628, util=96.73% 00:13:42.440 21:12:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:42.440 21:12:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:42.698 21:12:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:42.698 21:12:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:43.264 21:12:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:43.264 21:12:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:43.523 21:12:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:43.523 21:12:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:44.092 21:12:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:44.092 21:12:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:44.351 21:12:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:44.351 21:12:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 71524 00:13:44.351 21:12:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:44.352 21:12:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:13:44.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.352 21:12:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.352 21:12:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:44.352 21:12:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.352 21:12:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:44.352 21:12:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:44.352 21:12:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.352 21:12:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:44.352 21:12:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:44.352 nvmf hotplug test: fio failed as expected 00:13:44.352 21:12:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:44.352 21:12:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:44.611 rmmod nvme_tcp 00:13:44.611 rmmod nvme_fabrics 00:13:44.611 rmmod nvme_keyring 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 71142 ']' 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 71142 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 71142 ']' 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 71142 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71142 00:13:44.611 killing process with pid 71142 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:44.611 21:12:56 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71142' 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 71142 00:13:44.611 21:12:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 71142 00:13:45.990 21:12:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:45.990 21:12:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:45.990 21:12:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:45.990 21:12:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.990 21:12:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:45.990 21:12:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.990 21:12:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.991 21:12:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.991 21:12:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:45.991 00:13:45.991 real 0m21.151s 00:13:45.991 user 1m17.306s 00:13:45.991 sys 0m10.643s 00:13:45.991 21:12:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:45.991 21:12:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.991 ************************************ 00:13:45.991 END TEST nvmf_fio_target 00:13:45.991 ************************************ 00:13:45.991 21:12:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:45.991 21:12:57 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:45.991 21:12:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:45.991 21:12:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.991 21:12:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:45.991 ************************************ 00:13:45.991 START TEST nvmf_bdevio 00:13:45.991 ************************************ 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:45.991 * Looking for test storage... 
00:13:45.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.991 21:12:57 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:45.991 Cannot find device "nvmf_tgt_br" 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:45.991 Cannot find device "nvmf_tgt_br2" 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:45.991 Cannot find device "nvmf_tgt_br" 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:45.991 Cannot find device "nvmf_tgt_br2" 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:45.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:45.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:45.991 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:46.251 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:46.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:13:46.251 00:13:46.251 --- 10.0.0.2 ping statistics --- 00:13:46.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.252 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:46.252 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:46.252 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:13:46.252 00:13:46.252 --- 10.0.0.3 ping statistics --- 00:13:46.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.252 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:46.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:46.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:46.252 00:13:46.252 --- 10.0.0.1 ping statistics --- 00:13:46.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.252 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=71848 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 71848 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 71848 ']' 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.252 21:12:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:46.511 [2024-07-14 21:12:57.824820] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:46.511 [2024-07-14 21:12:57.824996] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.511 [2024-07-14 21:12:58.002424] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.770 [2024-07-14 21:12:58.237995] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.770 [2024-07-14 21:12:58.238077] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:46.770 [2024-07-14 21:12:58.238099] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.770 [2024-07-14 21:12:58.238116] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.770 [2024-07-14 21:12:58.238133] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.770 [2024-07-14 21:12:58.238359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:46.770 [2024-07-14 21:12:58.238520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:46.770 [2024-07-14 21:12:58.239163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.770 [2024-07-14 21:12:58.239172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:47.030 [2024-07-14 21:12:58.407917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:47.289 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.289 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:13:47.289 21:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:47.289 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.289 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:47.289 21:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.289 21:12:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.289 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.289 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:47.289 [2024-07-14 21:12:58.773842] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.289 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.289 21:12:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:47.289 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.289 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:47.549 Malloc0 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:47.549 [2024-07-14 21:12:58.887280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:47.549 { 00:13:47.549 "params": { 00:13:47.549 "name": "Nvme$subsystem", 00:13:47.549 "trtype": "$TEST_TRANSPORT", 00:13:47.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:47.549 "adrfam": "ipv4", 00:13:47.549 "trsvcid": "$NVMF_PORT", 00:13:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:47.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:47.549 "hdgst": ${hdgst:-false}, 00:13:47.549 "ddgst": ${ddgst:-false} 00:13:47.549 }, 00:13:47.549 "method": "bdev_nvme_attach_controller" 00:13:47.549 } 00:13:47.549 EOF 00:13:47.549 )") 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:47.549 21:12:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:47.549 "params": { 00:13:47.549 "name": "Nvme1", 00:13:47.549 "trtype": "tcp", 00:13:47.549 "traddr": "10.0.0.2", 00:13:47.549 "adrfam": "ipv4", 00:13:47.549 "trsvcid": "4420", 00:13:47.549 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.549 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:47.549 "hdgst": false, 00:13:47.549 "ddgst": false 00:13:47.549 }, 00:13:47.549 "method": "bdev_nvme_attach_controller" 00:13:47.549 }' 00:13:47.549 [2024-07-14 21:12:58.998167] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:13:47.549 [2024-07-14 21:12:58.998356] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71884 ] 00:13:47.808 [2024-07-14 21:12:59.170686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:48.067 [2024-07-14 21:12:59.389809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.067 [2024-07-14 21:12:59.389940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.067 [2024-07-14 21:12:59.389944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.067 [2024-07-14 21:12:59.576721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:48.326 I/O targets: 00:13:48.326 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:48.326 00:13:48.326 00:13:48.327 CUnit - A unit testing framework for C - Version 2.1-3 00:13:48.327 http://cunit.sourceforge.net/ 00:13:48.327 00:13:48.327 00:13:48.327 Suite: bdevio tests on: Nvme1n1 00:13:48.327 Test: blockdev write read block ...passed 00:13:48.327 Test: blockdev write zeroes read block ...passed 00:13:48.327 Test: blockdev write zeroes read no split ...passed 00:13:48.327 Test: blockdev write zeroes read split ...passed 00:13:48.327 Test: blockdev write zeroes read split partial ...passed 00:13:48.327 Test: blockdev reset ...[2024-07-14 21:12:59.830192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:48.327 [2024-07-14 21:12:59.830447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:13:48.327 [2024-07-14 21:12:59.844585] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
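For reference, the target side of this bdevio run comes down to five RPCs before the bdevio binary connects over TCP using the generated JSON printed above. A minimal sketch of that sequence, assuming rpc_cmd resolves to scripts/rpc.py on the default socket (the fio test earlier in this log calls the same script directly); flags are copied verbatim from the rpc_cmd calls in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# create the TCP transport (options exactly as captured above)
$rpc nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE in bdevio.sh)
$rpc bdev_malloc_create 64 512 -b Malloc0
# subsystem, namespace, and a TCP listener on 10.0.0.2:4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420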
00:13:48.327 passed 00:13:48.327 Test: blockdev write read 8 blocks ...passed 00:13:48.327 Test: blockdev write read size > 128k ...passed 00:13:48.327 Test: blockdev write read invalid size ...passed 00:13:48.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:48.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:48.327 Test: blockdev write read max offset ...passed 00:13:48.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:48.327 Test: blockdev writev readv 8 blocks ...passed 00:13:48.327 Test: blockdev writev readv 30 x 1block ...passed 00:13:48.327 Test: blockdev writev readv block ...passed 00:13:48.327 Test: blockdev writev readv size > 128k ...passed 00:13:48.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:48.327 Test: blockdev comparev and writev ...[2024-07-14 21:12:59.856232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.327 [2024-07-14 21:12:59.856332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:48.327 [2024-07-14 21:12:59.856365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.327 [2024-07-14 21:12:59.856386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:48.327 [2024-07-14 21:12:59.856944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.327 [2024-07-14 21:12:59.856993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:48.327 [2024-07-14 21:12:59.857022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.327 [2024-07-14 21:12:59.857041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:48.327 [2024-07-14 21:12:59.857474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.327 [2024-07-14 21:12:59.857520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:48.327 [2024-07-14 21:12:59.857549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.327 [2024-07-14 21:12:59.857571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:48.327 [2024-07-14 21:12:59.858061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.327 [2024-07-14 21:12:59.858153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:48.327 [2024-07-14 21:12:59.858181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:48.327 [2024-07-14 21:12:59.858200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:48.327 passed 00:13:48.327 Test: blockdev nvme passthru rw ...passed 00:13:48.327 Test: blockdev nvme passthru vendor specific ...[2024-07-14 21:12:59.859271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:48.327 [2024-07-14 21:12:59.859322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:48.327 [2024-07-14 21:12:59.859480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:48.327 [2024-07-14 21:12:59.859519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:48.327 [2024-07-14 21:12:59.859667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:48.327 [2024-07-14 21:12:59.859697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:48.327 [2024-07-14 21:12:59.859887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:48.327 [2024-07-14 21:12:59.859928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:48.327 passed 00:13:48.327 Test: blockdev nvme admin passthru ...passed 00:13:48.327 Test: blockdev copy ...passed 00:13:48.327 00:13:48.327 Run Summary: Type Total Ran Passed Failed Inactive 00:13:48.327 suites 1 1 n/a 0 0 00:13:48.327 tests 23 23 23 0 0 00:13:48.327 asserts 152 152 152 0 n/a 00:13:48.327 00:13:48.327 Elapsed time = 0.267 seconds 00:13:49.705 21:13:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.705 21:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.705 21:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:49.705 21:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.705 21:13:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:49.705 21:13:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:49.705 21:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:49.705 21:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:49.705 21:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:49.705 21:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:49.706 21:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.706 21:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:49.706 rmmod nvme_tcp 00:13:49.706 rmmod nvme_fabrics 00:13:49.706 rmmod nvme_keyring 00:13:49.706 21:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.706 21:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:49.706 21:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:13:49.706 21:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 71848 ']' 00:13:49.706 21:13:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 71848 00:13:49.706 21:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
71848 ']' 00:13:49.706 21:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 71848 00:13:49.706 21:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:13:49.706 21:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.706 21:13:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71848 00:13:49.706 21:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:49.706 21:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:49.706 killing process with pid 71848 00:13:49.706 21:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71848' 00:13:49.706 21:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 71848 00:13:49.706 21:13:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 71848 00:13:50.647 21:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:50.647 21:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:50.647 21:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:50.647 21:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:50.647 21:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:50.647 21:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.647 21:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.647 21:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.907 21:13:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:50.907 00:13:50.907 real 0m4.975s 00:13:50.907 user 0m18.572s 00:13:50.907 sys 0m0.943s 00:13:50.907 21:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.907 ************************************ 00:13:50.907 END TEST nvmf_bdevio 00:13:50.907 ************************************ 00:13:50.907 21:13:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:50.907 21:13:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:50.907 21:13:02 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:50.907 21:13:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:50.907 21:13:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.907 21:13:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:50.907 ************************************ 00:13:50.907 START TEST nvmf_auth_target 00:13:50.907 ************************************ 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:50.907 * Looking for test storage... 
00:13:50.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.907 21:13:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:50.908 Cannot find device "nvmf_tgt_br" 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:50.908 Cannot find device "nvmf_tgt_br2" 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:50.908 Cannot find device "nvmf_tgt_br" 00:13:50.908 
21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:50.908 Cannot find device "nvmf_tgt_br2" 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:13:50.908 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:51.167 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:51.167 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:51.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.167 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:13:51.167 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:51.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:51.168 21:13:02 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:51.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:13:51.168 00:13:51.168 --- 10.0.0.2 ping statistics --- 00:13:51.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.168 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:51.168 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:51.168 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:13:51.168 00:13:51.168 --- 10.0.0.3 ping statistics --- 00:13:51.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.168 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:51.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:51.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:51.168 00:13:51.168 --- 10.0.0.1 ping statistics --- 00:13:51.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.168 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:51.168 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:51.427 21:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:51.427 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:51.427 21:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:51.427 21:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.427 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72110 00:13:51.427 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:51.427 21:13:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72110 00:13:51.427 21:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72110 ']' 00:13:51.427 21:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.427 21:13:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.427 21:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.427 21:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.427 21:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=72141 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:52.363 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=079b319de0fb8b697163f994760c5a0b87220bd415581c7c 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zVi 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 079b319de0fb8b697163f994760c5a0b87220bd415581c7c 0 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 079b319de0fb8b697163f994760c5a0b87220bd415581c7c 0 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=079b319de0fb8b697163f994760c5a0b87220bd415581c7c 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zVi 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zVi 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.zVi 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=228f026ec562033dc59995ac4a60b844256c8214126933692998d16156c6951a 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.5oN 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 228f026ec562033dc59995ac4a60b844256c8214126933692998d16156c6951a 3 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 228f026ec562033dc59995ac4a60b844256c8214126933692998d16156c6951a 3 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=228f026ec562033dc59995ac4a60b844256c8214126933692998d16156c6951a 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:52.364 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.5oN 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.5oN 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.5oN 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bf42fb06cca32b5ef29265dc51851a80 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FNH 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bf42fb06cca32b5ef29265dc51851a80 1 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bf42fb06cca32b5ef29265dc51851a80 1 
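The gen_dhchap_key calls traced above read len/2 random bytes with xxd, drop the hex secret into a mktemp file, and run a short embedded python snippet (format_dhchap_key / format_key) to turn it into a DHHC-1:<digest>:...: string. Judging from the secrets that appear later in the nvme connect lines, the middle field is base64 of the ASCII hex secret plus four trailing bytes, which looks like the NVMe DH-HMAC-CHAP convention of appending a CRC-32 of the secret; the sketch below assumes exactly that and should be read as an approximation of the helper, not its actual source. The function name is illustrative.

gen_dhchap_key_sketch() {
    local digest=$1 len=$2    # digest index: 0=null 1=sha256 2=sha384 3=sha512, as in the trace
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # e.g. 48 or 64 hex characters
    file=$(mktemp -t spdk.key-sketch.XXX)
    # assumed encoding: DHHC-1:<digest>:base64(ascii_secret || crc32(ascii_secret)):
    python3 -c 'import base64, sys, zlib; s = sys.argv[1].encode(); crc = zlib.crc32(s).to_bytes(4, "little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(s + crc).decode()))' "$key" "$digest" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

Called as gen_dhchap_key_sketch 1 32 it would print the path of a sha256-style key file comparable to the /tmp/spdk.key-sha256.FNH created above.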
00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bf42fb06cca32b5ef29265dc51851a80 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FNH 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FNH 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.FNH 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=19523403af41d9b81f953672aeea7ad1bbcef727cefb61b4 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.oKJ 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 19523403af41d9b81f953672aeea7ad1bbcef727cefb61b4 2 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 19523403af41d9b81f953672aeea7ad1bbcef727cefb61b4 2 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=19523403af41d9b81f953672aeea7ad1bbcef727cefb61b4 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:52.624 21:13:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.oKJ 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.oKJ 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.oKJ 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:52.624 
21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5c39036f2891b2a6ec823653ed4ee3e392e606721350b12b 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.PZo 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5c39036f2891b2a6ec823653ed4ee3e392e606721350b12b 2 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5c39036f2891b2a6ec823653ed4ee3e392e606721350b12b 2 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5c39036f2891b2a6ec823653ed4ee3e392e606721350b12b 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.PZo 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.PZo 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.PZo 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=709801dc7a57eb2cf4a26b1cf9bf121b 00:13:52.624 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:52.625 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.DCE 00:13:52.625 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 709801dc7a57eb2cf4a26b1cf9bf121b 1 00:13:52.625 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 709801dc7a57eb2cf4a26b1cf9bf121b 1 00:13:52.625 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.625 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:52.625 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=709801dc7a57eb2cf4a26b1cf9bf121b 00:13:52.625 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:52.625 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.DCE 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.DCE 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.DCE 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a8e693d16102003cce08281133b948624a5596987212991b681eee90c6119195 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gsJ 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a8e693d16102003cce08281133b948624a5596987212991b681eee90c6119195 3 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a8e693d16102003cce08281133b948624a5596987212991b681eee90c6119195 3 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a8e693d16102003cce08281133b948624a5596987212991b681eee90c6119195 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gsJ 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gsJ 00:13:52.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.gsJ 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 72110 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72110 ']' 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.884 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.143 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.143 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:53.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
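By this point two SPDK processes are up: the nvmf target started earlier with -L nvmf_auth (pid 72110, reachable over rpc.py's default /var/tmp/spdk.sock) and a second spdk_tgt started with -r /var/tmp/host.sock -L nvme_auth (pid 72141) that plays the host side of the authentication. Every key file generated above has to be registered with both of them, which is what the rpc_cmd / hostrpc pairs in the following lines do. A rough sketch of that dual-socket pattern, using the rpc.py path and key paths from this run; tgtrpc is an illustrative stand-in for the suite's rpc_cmd helper, while hostrpc mirrors the wrapper visible at target/auth.sh@31.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock

tgtrpc()  { "$RPC" "$@"; }                  # target side, default socket /var/tmp/spdk.sock
hostrpc() { "$RPC" -s "$HOST_SOCK" "$@"; }  # host side

# the same key material must exist in both keyrings before DH-HMAC-CHAP can be negotiated
tgtrpc  keyring_file_add_key key0 /tmp/spdk.key-null.zVi
hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.zVi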
00:13:53.143 21:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 72141 /var/tmp/host.sock 00:13:53.143 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72141 ']' 00:13:53.143 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:53.143 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.143 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:53.143 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.143 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.708 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.708 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:53.708 21:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:53.708 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.708 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.708 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.708 21:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:53.708 21:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zVi 00:13:53.708 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.708 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.708 21:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.708 21:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.zVi 00:13:53.708 21:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.zVi 00:13:53.966 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.5oN ]] 00:13:53.966 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5oN 00:13:53.966 21:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.966 21:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.966 21:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.966 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5oN 00:13:53.966 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5oN 00:13:53.966 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:53.966 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.FNH 00:13:53.966 21:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.966 21:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.966 21:13:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.966 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.FNH 00:13:53.966 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.FNH 00:13:54.224 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.oKJ ]] 00:13:54.224 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.oKJ 00:13:54.224 21:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.224 21:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.224 21:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.224 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.oKJ 00:13:54.224 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.oKJ 00:13:54.482 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:54.482 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.PZo 00:13:54.482 21:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.482 21:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.482 21:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.482 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.PZo 00:13:54.482 21:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.PZo 00:13:54.740 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.DCE ]] 00:13:54.740 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.DCE 00:13:54.740 21:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.740 21:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.740 21:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.740 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.DCE 00:13:54.740 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.DCE 00:13:54.997 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:54.997 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gsJ 00:13:54.997 21:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.997 21:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.997 21:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.997 21:13:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.gsJ 00:13:54.997 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.gsJ 00:13:55.255 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:55.255 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:55.255 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:55.255 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:55.255 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:55.255 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:55.512 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:55.512 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:55.512 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:55.513 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:55.513 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:55.513 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.513 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.513 21:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.513 21:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.513 21:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.513 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.513 21:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.770 00:13:55.771 21:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:55.771 21:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:55.771 21:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.029 21:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.029 21:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.029 21:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.029 
21:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.029 21:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.029 21:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:56.029 { 00:13:56.029 "cntlid": 1, 00:13:56.029 "qid": 0, 00:13:56.029 "state": "enabled", 00:13:56.029 "thread": "nvmf_tgt_poll_group_000", 00:13:56.029 "listen_address": { 00:13:56.029 "trtype": "TCP", 00:13:56.029 "adrfam": "IPv4", 00:13:56.029 "traddr": "10.0.0.2", 00:13:56.029 "trsvcid": "4420" 00:13:56.029 }, 00:13:56.029 "peer_address": { 00:13:56.029 "trtype": "TCP", 00:13:56.029 "adrfam": "IPv4", 00:13:56.029 "traddr": "10.0.0.1", 00:13:56.029 "trsvcid": "56770" 00:13:56.029 }, 00:13:56.029 "auth": { 00:13:56.029 "state": "completed", 00:13:56.029 "digest": "sha256", 00:13:56.029 "dhgroup": "null" 00:13:56.029 } 00:13:56.029 } 00:13:56.029 ]' 00:13:56.029 21:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:56.029 21:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:56.029 21:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:56.029 21:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:56.029 21:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:56.029 21:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.029 21:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.029 21:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.287 21:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:14:00.470 21:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.470 21:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:00.470 21:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.470 21:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.470 21:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.470 21:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:00.470 21:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:00.470 21:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:00.751 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 null 1 00:14:00.751 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:00.751 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:00.751 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:00.751 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:00.751 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.751 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.751 21:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.751 21:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.751 21:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.751 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.751 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.008 00:14:01.008 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.008 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:01.008 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.266 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.266 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.266 21:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.266 21:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.266 21:13:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.266 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:01.266 { 00:14:01.266 "cntlid": 3, 00:14:01.266 "qid": 0, 00:14:01.266 "state": "enabled", 00:14:01.266 "thread": "nvmf_tgt_poll_group_000", 00:14:01.266 "listen_address": { 00:14:01.266 "trtype": "TCP", 00:14:01.266 "adrfam": "IPv4", 00:14:01.266 "traddr": "10.0.0.2", 00:14:01.266 "trsvcid": "4420" 00:14:01.266 }, 00:14:01.266 "peer_address": { 00:14:01.266 "trtype": "TCP", 00:14:01.266 "adrfam": "IPv4", 00:14:01.266 "traddr": "10.0.0.1", 00:14:01.266 "trsvcid": "59670" 00:14:01.266 }, 00:14:01.266 "auth": { 00:14:01.266 "state": "completed", 00:14:01.266 "digest": "sha256", 00:14:01.266 "dhgroup": "null" 00:14:01.266 } 00:14:01.266 } 00:14:01.266 ]' 00:14:01.266 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:01.266 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:14:01.266 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:01.524 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:01.524 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:01.524 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.524 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.524 21:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.782 21:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:14:02.346 21:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.346 21:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:02.347 21:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.347 21:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.347 21:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.347 21:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:02.347 21:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:02.347 21:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:02.604 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:02.604 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:02.604 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:02.604 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:02.604 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:02.604 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.604 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.604 21:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.604 21:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.604 21:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.604 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.604 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.861 00:14:02.861 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.861 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.861 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.119 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.119 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.119 21:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.119 21:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.377 21:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.377 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:03.377 { 00:14:03.377 "cntlid": 5, 00:14:03.377 "qid": 0, 00:14:03.377 "state": "enabled", 00:14:03.377 "thread": "nvmf_tgt_poll_group_000", 00:14:03.377 "listen_address": { 00:14:03.377 "trtype": "TCP", 00:14:03.377 "adrfam": "IPv4", 00:14:03.377 "traddr": "10.0.0.2", 00:14:03.377 "trsvcid": "4420" 00:14:03.377 }, 00:14:03.377 "peer_address": { 00:14:03.377 "trtype": "TCP", 00:14:03.377 "adrfam": "IPv4", 00:14:03.377 "traddr": "10.0.0.1", 00:14:03.377 "trsvcid": "59696" 00:14:03.377 }, 00:14:03.377 "auth": { 00:14:03.377 "state": "completed", 00:14:03.377 "digest": "sha256", 00:14:03.377 "dhgroup": "null" 00:14:03.377 } 00:14:03.377 } 00:14:03.377 ]' 00:14:03.377 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:03.377 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:03.377 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:03.377 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:03.377 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:03.377 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.377 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.377 21:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.635 21:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret 
DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:14:04.201 21:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.458 21:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:04.458 21:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.458 21:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.458 21:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.458 21:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.458 21:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:04.458 21:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:04.716 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:04.716 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:04.717 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:04.717 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:04.717 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:04.717 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.717 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:14:04.717 21:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.717 21:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.717 21:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.717 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:04.717 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:04.975 00:14:04.975 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:04.975 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:04.975 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.232 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.232 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
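Each connect_authenticate round traced above has the same shape: pick one digest and one DH group on the host with bdev_nvme_set_options, authorize the host NQN on the subsystem with a specific key pair, attach a controller through the host-side bdev_nvme layer (which drives the DH-HMAC-CHAP exchange), then read nvmf_subsystem_get_qpairs and check with jq that the new queue pair reports auth state "completed" with the expected digest and dhgroup, and finally tear everything down for the next combination. A condensed sketch of one round, reusing the tgtrpc/hostrpc wrappers from the earlier sketch and the NQNs, address, and key names seen in this run (digest, dhgroup, and key index are per-iteration parameters):

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291

# host side: restrict the negotiation to one digest / one DH group
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# target side: authorize the host with key0 (ckey0 adds bidirectional authentication)
tgtrpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: attaching a controller triggers the DH-HMAC-CHAP exchange
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# verify the qpair actually authenticated
tgtrpc nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"

# clean up before the next digest/dhgroup/key combination
hostrpc bdev_nvme_detach_controller nvme0
tgtrpc nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The trace additionally repeats the handshake from the kernel initiator, passing the literal DHHC-1 secrets to nvme connect and then disconnecting, before the host is removed and the next combination starts.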
00:14:05.232 21:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.232 21:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.232 21:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.232 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.232 { 00:14:05.232 "cntlid": 7, 00:14:05.232 "qid": 0, 00:14:05.232 "state": "enabled", 00:14:05.232 "thread": "nvmf_tgt_poll_group_000", 00:14:05.232 "listen_address": { 00:14:05.232 "trtype": "TCP", 00:14:05.232 "adrfam": "IPv4", 00:14:05.232 "traddr": "10.0.0.2", 00:14:05.232 "trsvcid": "4420" 00:14:05.232 }, 00:14:05.232 "peer_address": { 00:14:05.232 "trtype": "TCP", 00:14:05.232 "adrfam": "IPv4", 00:14:05.232 "traddr": "10.0.0.1", 00:14:05.232 "trsvcid": "59730" 00:14:05.232 }, 00:14:05.232 "auth": { 00:14:05.232 "state": "completed", 00:14:05.232 "digest": "sha256", 00:14:05.232 "dhgroup": "null" 00:14:05.232 } 00:14:05.232 } 00:14:05.232 ]' 00:14:05.232 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:05.233 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.233 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.233 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:05.233 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.233 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.233 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.233 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.490 21:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe2048 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.424 21:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.991 00:14:06.991 21:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:06.991 21:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:06.991 21:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.249 21:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.249 21:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.249 21:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.249 21:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.249 21:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.249 21:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.249 { 00:14:07.249 "cntlid": 9, 00:14:07.249 "qid": 0, 00:14:07.249 "state": "enabled", 00:14:07.249 "thread": "nvmf_tgt_poll_group_000", 00:14:07.249 "listen_address": { 00:14:07.249 "trtype": "TCP", 00:14:07.250 "adrfam": "IPv4", 00:14:07.250 "traddr": "10.0.0.2", 00:14:07.250 "trsvcid": "4420" 00:14:07.250 }, 00:14:07.250 "peer_address": { 00:14:07.250 "trtype": "TCP", 00:14:07.250 "adrfam": "IPv4", 00:14:07.250 "traddr": "10.0.0.1", 00:14:07.250 "trsvcid": "59748" 00:14:07.250 }, 00:14:07.250 "auth": { 00:14:07.250 "state": "completed", 00:14:07.250 "digest": "sha256", 00:14:07.250 "dhgroup": "ffdhe2048" 00:14:07.250 } 00:14:07.250 } 00:14:07.250 ]' 00:14:07.250 21:13:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.250 21:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.250 21:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.250 21:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:07.250 21:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.250 21:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.250 21:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.250 21:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.508 21:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:14:08.075 21:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.075 21:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:08.075 21:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.075 21:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.333 21:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.333 21:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.333 21:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:08.333 21:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:08.591 21:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:08.591 21:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.591 21:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:08.591 21:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:08.591 21:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:08.591 21:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.591 21:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.591 21:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.591 21:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:14:08.591 21:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.591 21:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.591 21:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.850 00:14:08.850 21:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.850 21:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.850 21:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.108 21:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.108 21:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.108 21:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.108 21:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.108 21:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.108 21:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.108 { 00:14:09.108 "cntlid": 11, 00:14:09.108 "qid": 0, 00:14:09.108 "state": "enabled", 00:14:09.108 "thread": "nvmf_tgt_poll_group_000", 00:14:09.108 "listen_address": { 00:14:09.108 "trtype": "TCP", 00:14:09.108 "adrfam": "IPv4", 00:14:09.108 "traddr": "10.0.0.2", 00:14:09.108 "trsvcid": "4420" 00:14:09.108 }, 00:14:09.108 "peer_address": { 00:14:09.108 "trtype": "TCP", 00:14:09.108 "adrfam": "IPv4", 00:14:09.108 "traddr": "10.0.0.1", 00:14:09.108 "trsvcid": "59790" 00:14:09.108 }, 00:14:09.108 "auth": { 00:14:09.108 "state": "completed", 00:14:09.108 "digest": "sha256", 00:14:09.108 "dhgroup": "ffdhe2048" 00:14:09.108 } 00:14:09.108 } 00:14:09.108 ]' 00:14:09.108 21:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.108 21:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:09.108 21:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.108 21:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:09.108 21:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.367 21:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.367 21:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.367 21:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.627 21:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid 
e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:14:10.192 21:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.192 21:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:10.192 21:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.192 21:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.192 21:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.192 21:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.192 21:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:10.192 21:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:10.450 21:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:10.450 21:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:10.450 21:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:10.450 21:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:10.450 21:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:10.450 21:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.450 21:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.450 21:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.450 21:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.450 21:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.450 21:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.450 21:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.016 00:14:11.016 21:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.016 21:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.016 21:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:11.274 21:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.274 21:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.274 21:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.274 21:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.274 21:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.274 21:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.274 { 00:14:11.274 "cntlid": 13, 00:14:11.274 "qid": 0, 00:14:11.274 "state": "enabled", 00:14:11.274 "thread": "nvmf_tgt_poll_group_000", 00:14:11.274 "listen_address": { 00:14:11.274 "trtype": "TCP", 00:14:11.274 "adrfam": "IPv4", 00:14:11.274 "traddr": "10.0.0.2", 00:14:11.274 "trsvcid": "4420" 00:14:11.274 }, 00:14:11.274 "peer_address": { 00:14:11.274 "trtype": "TCP", 00:14:11.274 "adrfam": "IPv4", 00:14:11.274 "traddr": "10.0.0.1", 00:14:11.274 "trsvcid": "40796" 00:14:11.274 }, 00:14:11.274 "auth": { 00:14:11.274 "state": "completed", 00:14:11.274 "digest": "sha256", 00:14:11.274 "dhgroup": "ffdhe2048" 00:14:11.274 } 00:14:11.274 } 00:14:11.274 ]' 00:14:11.274 21:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.274 21:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.274 21:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.274 21:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:11.274 21:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.274 21:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.274 21:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.274 21:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.539 21:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:14:12.143 21:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.144 21:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:12.144 21:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.144 21:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.144 21:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.144 21:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.144 21:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:12.144 21:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:12.402 21:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:12.402 21:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.402 21:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:12.402 21:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:12.402 21:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:12.402 21:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.402 21:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:14:12.402 21:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.402 21:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.402 21:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.402 21:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:12.402 21:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:12.970 00:14:12.970 21:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:12.970 21:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:12.970 21:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.970 21:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.970 21:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.970 21:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.970 21:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.970 21:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.970 21:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:12.970 { 00:14:12.970 "cntlid": 15, 00:14:12.970 "qid": 0, 00:14:12.970 "state": "enabled", 00:14:12.970 "thread": "nvmf_tgt_poll_group_000", 00:14:12.970 "listen_address": { 00:14:12.970 "trtype": "TCP", 00:14:12.970 "adrfam": "IPv4", 00:14:12.970 "traddr": "10.0.0.2", 00:14:12.970 "trsvcid": "4420" 00:14:12.970 }, 00:14:12.970 "peer_address": { 00:14:12.970 "trtype": "TCP", 00:14:12.970 "adrfam": "IPv4", 00:14:12.970 "traddr": "10.0.0.1", 00:14:12.970 "trsvcid": "40836" 00:14:12.970 }, 00:14:12.970 "auth": { 00:14:12.970 "state": 
"completed", 00:14:12.970 "digest": "sha256", 00:14:12.970 "dhgroup": "ffdhe2048" 00:14:12.970 } 00:14:12.970 } 00:14:12.970 ]' 00:14:12.970 21:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.229 21:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.229 21:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.229 21:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:13.229 21:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.229 21:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.229 21:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.229 21:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.488 21:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:14:14.057 21:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.057 21:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:14.057 21:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.057 21:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.057 21:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.057 21:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:14.057 21:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.057 21:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:14.057 21:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:14.316 21:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:14.316 21:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.316 21:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:14.316 21:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:14.316 21:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:14.316 21:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.316 21:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:14:14.316 21:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.316 21:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.316 21:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.316 21:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.316 21:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.575 00:14:14.575 21:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:14.575 21:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:14.575 21:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.834 21:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.834 21:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.834 21:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.834 21:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.834 21:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.834 21:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.834 { 00:14:14.834 "cntlid": 17, 00:14:14.834 "qid": 0, 00:14:14.834 "state": "enabled", 00:14:14.834 "thread": "nvmf_tgt_poll_group_000", 00:14:14.834 "listen_address": { 00:14:14.834 "trtype": "TCP", 00:14:14.834 "adrfam": "IPv4", 00:14:14.834 "traddr": "10.0.0.2", 00:14:14.834 "trsvcid": "4420" 00:14:14.834 }, 00:14:14.834 "peer_address": { 00:14:14.834 "trtype": "TCP", 00:14:14.834 "adrfam": "IPv4", 00:14:14.834 "traddr": "10.0.0.1", 00:14:14.834 "trsvcid": "40874" 00:14:14.834 }, 00:14:14.834 "auth": { 00:14:14.834 "state": "completed", 00:14:14.834 "digest": "sha256", 00:14:14.834 "dhgroup": "ffdhe3072" 00:14:14.834 } 00:14:14.834 } 00:14:14.834 ]' 00:14:15.093 21:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.093 21:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.093 21:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.093 21:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:15.093 21:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.093 21:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.093 21:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.094 21:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.351 21:13:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:14:15.916 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.916 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:15.916 21:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.916 21:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.916 21:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.916 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.916 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:15.916 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:16.173 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:16.173 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:16.173 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:16.173 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:16.173 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:16.173 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.173 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.173 21:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.173 21:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.173 21:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.174 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.174 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.432 00:14:16.432 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
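In between the SPDK-host checks, each pass also exercises the Linux kernel initiator: the trace connects with nvme-cli, passing the same secrets directly in DHHC-1 text form, then disconnects and removes the host entry from the subsystem before the next key id is tried. A condensed sketch of that leg using the addresses and NQNs from this run; HOST_NQN, HOST_KEY and CTRL_KEY stand in for the uuid-based host NQN and the DHHC-1:xx:...: secrets printed in the entries above, and --dhchap-ctrl-secret is omitted in the passes that test unidirectional authentication (key id 3):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOST_NQN" --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 \
      --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN"
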
00:14:16.432 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.432 21:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.998 21:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.998 21:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.998 21:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.998 21:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.998 21:13:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.998 21:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.998 { 00:14:16.998 "cntlid": 19, 00:14:16.998 "qid": 0, 00:14:16.998 "state": "enabled", 00:14:16.998 "thread": "nvmf_tgt_poll_group_000", 00:14:16.998 "listen_address": { 00:14:16.998 "trtype": "TCP", 00:14:16.998 "adrfam": "IPv4", 00:14:16.998 "traddr": "10.0.0.2", 00:14:16.998 "trsvcid": "4420" 00:14:16.998 }, 00:14:16.998 "peer_address": { 00:14:16.998 "trtype": "TCP", 00:14:16.998 "adrfam": "IPv4", 00:14:16.998 "traddr": "10.0.0.1", 00:14:16.998 "trsvcid": "40902" 00:14:16.998 }, 00:14:16.998 "auth": { 00:14:16.998 "state": "completed", 00:14:16.998 "digest": "sha256", 00:14:16.998 "dhgroup": "ffdhe3072" 00:14:16.998 } 00:14:16.998 } 00:14:16.998 ]' 00:14:16.998 21:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.998 21:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:16.998 21:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.998 21:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:16.998 21:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.998 21:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.998 21:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.998 21:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.257 21:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:14:17.823 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.823 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:17.823 21:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.823 21:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.823 21:13:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.823 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.823 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:17.823 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:18.081 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:18.081 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:18.081 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:18.081 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:18.081 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:18.081 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.081 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.081 21:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.081 21:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.081 21:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.081 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.081 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.648 00:14:18.648 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:18.648 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.648 21:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:18.907 21:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.907 21:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.907 21:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.907 21:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.907 21:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.907 21:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.907 { 00:14:18.907 "cntlid": 21, 00:14:18.907 "qid": 0, 00:14:18.907 "state": "enabled", 00:14:18.907 "thread": "nvmf_tgt_poll_group_000", 00:14:18.907 "listen_address": { 00:14:18.907 "trtype": "TCP", 00:14:18.907 "adrfam": "IPv4", 
00:14:18.907 "traddr": "10.0.0.2", 00:14:18.907 "trsvcid": "4420" 00:14:18.907 }, 00:14:18.907 "peer_address": { 00:14:18.907 "trtype": "TCP", 00:14:18.907 "adrfam": "IPv4", 00:14:18.907 "traddr": "10.0.0.1", 00:14:18.907 "trsvcid": "40934" 00:14:18.907 }, 00:14:18.907 "auth": { 00:14:18.907 "state": "completed", 00:14:18.907 "digest": "sha256", 00:14:18.907 "dhgroup": "ffdhe3072" 00:14:18.907 } 00:14:18.907 } 00:14:18.907 ]' 00:14:18.907 21:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.907 21:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.907 21:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.907 21:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:18.907 21:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.907 21:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.907 21:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.907 21:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.166 21:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:14:20.099 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.099 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:20.099 21:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.099 21:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.099 21:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.099 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:20.099 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:20.099 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:20.099 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:20.099 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:20.099 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:20.099 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:20.099 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:20.099 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:20.099 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:14:20.100 21:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.100 21:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.100 21:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.100 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:20.100 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:20.666 00:14:20.666 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.666 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.666 21:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.666 21:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.924 21:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.924 21:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.924 21:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.924 21:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.924 21:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.924 { 00:14:20.924 "cntlid": 23, 00:14:20.924 "qid": 0, 00:14:20.924 "state": "enabled", 00:14:20.924 "thread": "nvmf_tgt_poll_group_000", 00:14:20.924 "listen_address": { 00:14:20.924 "trtype": "TCP", 00:14:20.924 "adrfam": "IPv4", 00:14:20.924 "traddr": "10.0.0.2", 00:14:20.924 "trsvcid": "4420" 00:14:20.924 }, 00:14:20.924 "peer_address": { 00:14:20.924 "trtype": "TCP", 00:14:20.924 "adrfam": "IPv4", 00:14:20.924 "traddr": "10.0.0.1", 00:14:20.924 "trsvcid": "38704" 00:14:20.924 }, 00:14:20.924 "auth": { 00:14:20.924 "state": "completed", 00:14:20.924 "digest": "sha256", 00:14:20.924 "dhgroup": "ffdhe3072" 00:14:20.924 } 00:14:20.924 } 00:14:20.924 ]' 00:14:20.924 21:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.924 21:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.924 21:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.924 21:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:20.924 21:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.924 21:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.924 21:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.924 21:13:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.183 21:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:14:21.750 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.750 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:21.750 21:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.750 21:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.750 21:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.750 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:21.750 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.750 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:21.750 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:22.008 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:14:22.008 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.008 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:22.008 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:22.008 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:22.008 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.008 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.008 21:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.008 21:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.008 21:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.008 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.008 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.265 00:14:22.265 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.265 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.265 21:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.524 21:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.524 21:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.524 21:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.524 21:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.783 21:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.783 21:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.783 { 00:14:22.783 "cntlid": 25, 00:14:22.783 "qid": 0, 00:14:22.783 "state": "enabled", 00:14:22.783 "thread": "nvmf_tgt_poll_group_000", 00:14:22.783 "listen_address": { 00:14:22.783 "trtype": "TCP", 00:14:22.783 "adrfam": "IPv4", 00:14:22.783 "traddr": "10.0.0.2", 00:14:22.783 "trsvcid": "4420" 00:14:22.783 }, 00:14:22.783 "peer_address": { 00:14:22.783 "trtype": "TCP", 00:14:22.783 "adrfam": "IPv4", 00:14:22.783 "traddr": "10.0.0.1", 00:14:22.783 "trsvcid": "38736" 00:14:22.783 }, 00:14:22.783 "auth": { 00:14:22.783 "state": "completed", 00:14:22.783 "digest": "sha256", 00:14:22.783 "dhgroup": "ffdhe4096" 00:14:22.783 } 00:14:22.783 } 00:14:22.783 ]' 00:14:22.783 21:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.783 21:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.783 21:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.783 21:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:22.783 21:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.783 21:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.783 21:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.783 21:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.041 21:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:14:23.608 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.608 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:23.608 21:13:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.608 21:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.608 21:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.608 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.608 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:23.608 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:23.866 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:23.866 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.866 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:23.866 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:23.866 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:23.866 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.866 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.866 21:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.866 21:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.866 21:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.866 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.866 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.125 00:14:24.125 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.125 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.125 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.384 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.384 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.384 21:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.384 21:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.384 21:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.384 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.384 { 
00:14:24.384 "cntlid": 27, 00:14:24.384 "qid": 0, 00:14:24.384 "state": "enabled", 00:14:24.384 "thread": "nvmf_tgt_poll_group_000", 00:14:24.384 "listen_address": { 00:14:24.384 "trtype": "TCP", 00:14:24.384 "adrfam": "IPv4", 00:14:24.384 "traddr": "10.0.0.2", 00:14:24.384 "trsvcid": "4420" 00:14:24.384 }, 00:14:24.384 "peer_address": { 00:14:24.384 "trtype": "TCP", 00:14:24.384 "adrfam": "IPv4", 00:14:24.384 "traddr": "10.0.0.1", 00:14:24.384 "trsvcid": "38762" 00:14:24.384 }, 00:14:24.384 "auth": { 00:14:24.384 "state": "completed", 00:14:24.384 "digest": "sha256", 00:14:24.384 "dhgroup": "ffdhe4096" 00:14:24.384 } 00:14:24.384 } 00:14:24.384 ]' 00:14:24.384 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.384 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.384 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.643 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:24.643 21:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.643 21:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.643 21:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.643 21:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.902 21:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:14:25.470 21:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.470 21:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:25.470 21:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.470 21:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.470 21:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.470 21:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.470 21:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:25.470 21:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:25.729 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:25.729 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:25.729 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:25.729 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe4096 00:14:25.729 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:25.729 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.729 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.729 21:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.729 21:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.729 21:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.729 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.729 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.987 00:14:25.987 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:25.987 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.987 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.245 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.245 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.245 21:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.245 21:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.245 21:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.245 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.245 { 00:14:26.245 "cntlid": 29, 00:14:26.245 "qid": 0, 00:14:26.245 "state": "enabled", 00:14:26.245 "thread": "nvmf_tgt_poll_group_000", 00:14:26.245 "listen_address": { 00:14:26.245 "trtype": "TCP", 00:14:26.245 "adrfam": "IPv4", 00:14:26.245 "traddr": "10.0.0.2", 00:14:26.245 "trsvcid": "4420" 00:14:26.245 }, 00:14:26.245 "peer_address": { 00:14:26.245 "trtype": "TCP", 00:14:26.245 "adrfam": "IPv4", 00:14:26.245 "traddr": "10.0.0.1", 00:14:26.245 "trsvcid": "38806" 00:14:26.245 }, 00:14:26.245 "auth": { 00:14:26.245 "state": "completed", 00:14:26.245 "digest": "sha256", 00:14:26.245 "dhgroup": "ffdhe4096" 00:14:26.245 } 00:14:26.245 } 00:14:26.245 ]' 00:14:26.245 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.245 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.246 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.246 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:26.246 21:13:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.567 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.567 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.567 21:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.829 21:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:14:27.395 21:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.395 21:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:27.395 21:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.395 21:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.395 21:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.395 21:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:27.395 21:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:27.395 21:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:27.654 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:27.654 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:27.654 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:27.654 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:27.654 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:27.654 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.654 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:14:27.654 21:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.654 21:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.654 21:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.654 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:27.654 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:27.912 00:14:27.913 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:27.913 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:27.913 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.171 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.171 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.171 21:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.171 21:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.171 21:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.171 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.171 { 00:14:28.171 "cntlid": 31, 00:14:28.171 "qid": 0, 00:14:28.171 "state": "enabled", 00:14:28.171 "thread": "nvmf_tgt_poll_group_000", 00:14:28.171 "listen_address": { 00:14:28.171 "trtype": "TCP", 00:14:28.171 "adrfam": "IPv4", 00:14:28.171 "traddr": "10.0.0.2", 00:14:28.171 "trsvcid": "4420" 00:14:28.171 }, 00:14:28.171 "peer_address": { 00:14:28.171 "trtype": "TCP", 00:14:28.171 "adrfam": "IPv4", 00:14:28.171 "traddr": "10.0.0.1", 00:14:28.171 "trsvcid": "38830" 00:14:28.171 }, 00:14:28.171 "auth": { 00:14:28.171 "state": "completed", 00:14:28.171 "digest": "sha256", 00:14:28.171 "dhgroup": "ffdhe4096" 00:14:28.171 } 00:14:28.171 } 00:14:28.171 ]' 00:14:28.171 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.171 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.171 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.171 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:28.171 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.430 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.430 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.430 21:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.688 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:14:29.255 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.255 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:29.255 21:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.255 21:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.255 21:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.255 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:29.255 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.255 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:29.255 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:29.513 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:29.513 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.513 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:29.513 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:29.513 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:29.514 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.514 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.514 21:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.514 21:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.514 21:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.514 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.514 21:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.081 00:14:30.081 21:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.081 21:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.081 21:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.340 21:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.340 21:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.340 21:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.340 21:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:30.340 21:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.340 21:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.340 { 00:14:30.340 "cntlid": 33, 00:14:30.340 "qid": 0, 00:14:30.340 "state": "enabled", 00:14:30.340 "thread": "nvmf_tgt_poll_group_000", 00:14:30.340 "listen_address": { 00:14:30.340 "trtype": "TCP", 00:14:30.340 "adrfam": "IPv4", 00:14:30.340 "traddr": "10.0.0.2", 00:14:30.340 "trsvcid": "4420" 00:14:30.340 }, 00:14:30.340 "peer_address": { 00:14:30.340 "trtype": "TCP", 00:14:30.340 "adrfam": "IPv4", 00:14:30.340 "traddr": "10.0.0.1", 00:14:30.340 "trsvcid": "38854" 00:14:30.340 }, 00:14:30.340 "auth": { 00:14:30.340 "state": "completed", 00:14:30.340 "digest": "sha256", 00:14:30.340 "dhgroup": "ffdhe6144" 00:14:30.340 } 00:14:30.340 } 00:14:30.340 ]' 00:14:30.340 21:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.340 21:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.340 21:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.340 21:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:30.340 21:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.340 21:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.340 21:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.340 21:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.598 21:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:14:31.165 21:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.165 21:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:31.165 21:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.165 21:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.165 21:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.165 21:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.165 21:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:31.165 21:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:31.422 21:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:31.423 21:13:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.423 21:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:31.423 21:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:31.423 21:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:31.423 21:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.423 21:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.423 21:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.423 21:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.423 21:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.423 21:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.423 21:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.988 00:14:31.988 21:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:31.988 21:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.988 21:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.246 21:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.246 21:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.246 21:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.246 21:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.246 21:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.246 21:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.246 { 00:14:32.246 "cntlid": 35, 00:14:32.246 "qid": 0, 00:14:32.246 "state": "enabled", 00:14:32.246 "thread": "nvmf_tgt_poll_group_000", 00:14:32.246 "listen_address": { 00:14:32.246 "trtype": "TCP", 00:14:32.246 "adrfam": "IPv4", 00:14:32.246 "traddr": "10.0.0.2", 00:14:32.246 "trsvcid": "4420" 00:14:32.246 }, 00:14:32.246 "peer_address": { 00:14:32.246 "trtype": "TCP", 00:14:32.246 "adrfam": "IPv4", 00:14:32.246 "traddr": "10.0.0.1", 00:14:32.246 "trsvcid": "42972" 00:14:32.246 }, 00:14:32.246 "auth": { 00:14:32.246 "state": "completed", 00:14:32.246 "digest": "sha256", 00:14:32.246 "dhgroup": "ffdhe6144" 00:14:32.246 } 00:14:32.246 } 00:14:32.246 ]' 00:14:32.246 21:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.246 21:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.246 21:13:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.246 21:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:32.246 21:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.246 21:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.246 21:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.246 21:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.503 21:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:14:33.069 21:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.069 21:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:33.069 21:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.069 21:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.069 21:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.069 21:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.069 21:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:33.069 21:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:33.327 21:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:33.327 21:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.327 21:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.328 21:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:33.328 21:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:33.328 21:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.328 21:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.328 21:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.328 21:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.328 21:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.328 21:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.328 21:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.895 00:14:33.895 21:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.895 21:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.895 21:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.153 21:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.153 21:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.153 21:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.153 21:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.153 21:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.153 21:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.153 { 00:14:34.153 "cntlid": 37, 00:14:34.153 "qid": 0, 00:14:34.153 "state": "enabled", 00:14:34.153 "thread": "nvmf_tgt_poll_group_000", 00:14:34.153 "listen_address": { 00:14:34.153 "trtype": "TCP", 00:14:34.153 "adrfam": "IPv4", 00:14:34.153 "traddr": "10.0.0.2", 00:14:34.153 "trsvcid": "4420" 00:14:34.153 }, 00:14:34.153 "peer_address": { 00:14:34.153 "trtype": "TCP", 00:14:34.153 "adrfam": "IPv4", 00:14:34.153 "traddr": "10.0.0.1", 00:14:34.153 "trsvcid": "43014" 00:14:34.153 }, 00:14:34.153 "auth": { 00:14:34.153 "state": "completed", 00:14:34.153 "digest": "sha256", 00:14:34.153 "dhgroup": "ffdhe6144" 00:14:34.153 } 00:14:34.153 } 00:14:34.153 ]' 00:14:34.153 21:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.153 21:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.153 21:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.153 21:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:34.153 21:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.153 21:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.153 21:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.153 21:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.411 21:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret 
DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.347 21:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.606 21:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.606 21:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:35.606 21:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:35.865 00:14:35.865 21:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.865 21:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.865 21:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.125 21:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.125 21:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:14:36.125 21:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.125 21:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.384 21:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.384 21:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.384 { 00:14:36.384 "cntlid": 39, 00:14:36.384 "qid": 0, 00:14:36.384 "state": "enabled", 00:14:36.384 "thread": "nvmf_tgt_poll_group_000", 00:14:36.384 "listen_address": { 00:14:36.384 "trtype": "TCP", 00:14:36.384 "adrfam": "IPv4", 00:14:36.384 "traddr": "10.0.0.2", 00:14:36.384 "trsvcid": "4420" 00:14:36.384 }, 00:14:36.384 "peer_address": { 00:14:36.384 "trtype": "TCP", 00:14:36.384 "adrfam": "IPv4", 00:14:36.384 "traddr": "10.0.0.1", 00:14:36.384 "trsvcid": "43038" 00:14:36.384 }, 00:14:36.384 "auth": { 00:14:36.384 "state": "completed", 00:14:36.384 "digest": "sha256", 00:14:36.384 "dhgroup": "ffdhe6144" 00:14:36.384 } 00:14:36.384 } 00:14:36.384 ]' 00:14:36.384 21:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.384 21:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.384 21:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.384 21:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:36.384 21:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.384 21:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.384 21:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.384 21:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.644 21:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:14:37.212 21:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.212 21:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:37.212 21:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.212 21:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.212 21:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.212 21:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:37.212 21:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.212 21:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:37.212 21:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:37.472 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:37.472 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.472 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:37.472 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:37.472 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:37.472 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.472 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.472 21:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.472 21:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.472 21:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.472 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.472 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.410 00:14:38.410 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.410 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.410 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.410 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.410 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.410 21:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.410 21:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.410 21:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.410 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.410 { 00:14:38.410 "cntlid": 41, 00:14:38.410 "qid": 0, 00:14:38.410 "state": "enabled", 00:14:38.410 "thread": "nvmf_tgt_poll_group_000", 00:14:38.410 "listen_address": { 00:14:38.410 "trtype": "TCP", 00:14:38.410 "adrfam": "IPv4", 00:14:38.410 "traddr": "10.0.0.2", 00:14:38.410 "trsvcid": "4420" 00:14:38.410 }, 00:14:38.410 "peer_address": { 00:14:38.410 "trtype": "TCP", 00:14:38.410 "adrfam": "IPv4", 00:14:38.410 "traddr": "10.0.0.1", 00:14:38.410 "trsvcid": "43062" 00:14:38.410 }, 00:14:38.410 "auth": { 00:14:38.410 "state": "completed", 00:14:38.410 "digest": "sha256", 00:14:38.410 "dhgroup": "ffdhe8192" 00:14:38.410 } 00:14:38.410 } 00:14:38.410 
]' 00:14:38.410 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.410 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.410 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.669 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:38.669 21:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.669 21:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.669 21:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.669 21:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.927 21:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:14:39.495 21:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.495 21:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:39.495 21:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.495 21:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.495 21:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.495 21:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.495 21:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:39.495 21:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:39.753 21:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:39.753 21:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.753 21:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:39.753 21:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:39.753 21:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:39.753 21:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.753 21:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.753 21:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.753 21:13:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.753 21:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.753 21:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.753 21:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.319 00:14:40.319 21:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.319 21:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.319 21:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.578 21:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.578 21:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.578 21:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.578 21:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.578 21:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.578 21:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.578 { 00:14:40.578 "cntlid": 43, 00:14:40.578 "qid": 0, 00:14:40.578 "state": "enabled", 00:14:40.578 "thread": "nvmf_tgt_poll_group_000", 00:14:40.578 "listen_address": { 00:14:40.578 "trtype": "TCP", 00:14:40.578 "adrfam": "IPv4", 00:14:40.578 "traddr": "10.0.0.2", 00:14:40.578 "trsvcid": "4420" 00:14:40.578 }, 00:14:40.578 "peer_address": { 00:14:40.578 "trtype": "TCP", 00:14:40.578 "adrfam": "IPv4", 00:14:40.578 "traddr": "10.0.0.1", 00:14:40.578 "trsvcid": "40132" 00:14:40.578 }, 00:14:40.578 "auth": { 00:14:40.578 "state": "completed", 00:14:40.578 "digest": "sha256", 00:14:40.578 "dhgroup": "ffdhe8192" 00:14:40.578 } 00:14:40.578 } 00:14:40.578 ]' 00:14:40.578 21:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.836 21:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.836 21:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.836 21:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:40.836 21:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.836 21:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.836 21:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.836 21:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.095 21:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:14:41.662 21:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.663 21:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:41.663 21:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.663 21:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.663 21:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.663 21:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.663 21:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:41.663 21:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:41.921 21:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:41.921 21:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.921 21:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.921 21:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:41.921 21:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:41.921 21:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.921 21:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.921 21:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.921 21:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.921 21:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.921 21:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.921 21:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.486 00:14:42.486 21:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.486 21:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.486 21:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.744 21:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.745 21:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.745 21:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.745 21:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.003 21:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.003 21:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.003 { 00:14:43.003 "cntlid": 45, 00:14:43.003 "qid": 0, 00:14:43.003 "state": "enabled", 00:14:43.003 "thread": "nvmf_tgt_poll_group_000", 00:14:43.003 "listen_address": { 00:14:43.003 "trtype": "TCP", 00:14:43.003 "adrfam": "IPv4", 00:14:43.003 "traddr": "10.0.0.2", 00:14:43.003 "trsvcid": "4420" 00:14:43.003 }, 00:14:43.003 "peer_address": { 00:14:43.003 "trtype": "TCP", 00:14:43.003 "adrfam": "IPv4", 00:14:43.003 "traddr": "10.0.0.1", 00:14:43.003 "trsvcid": "40176" 00:14:43.003 }, 00:14:43.003 "auth": { 00:14:43.003 "state": "completed", 00:14:43.003 "digest": "sha256", 00:14:43.003 "dhgroup": "ffdhe8192" 00:14:43.003 } 00:14:43.003 } 00:14:43.003 ]' 00:14:43.003 21:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.003 21:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.003 21:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.003 21:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:43.003 21:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.003 21:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.003 21:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.003 21:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.261 21:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:14:43.826 21:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.826 21:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:43.826 21:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.826 21:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.826 21:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.826 21:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.826 21:13:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:43.826 21:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:44.085 21:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:44.085 21:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.085 21:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:44.085 21:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:44.085 21:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:44.085 21:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.085 21:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:14:44.085 21:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.085 21:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.085 21:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.085 21:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:44.085 21:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:44.652 00:14:44.652 21:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.652 21:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.652 21:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.910 21:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.910 21:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.910 21:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.910 21:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.910 21:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.910 21:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.910 { 00:14:44.910 "cntlid": 47, 00:14:44.910 "qid": 0, 00:14:44.910 "state": "enabled", 00:14:44.910 "thread": "nvmf_tgt_poll_group_000", 00:14:44.910 "listen_address": { 00:14:44.910 "trtype": "TCP", 00:14:44.910 "adrfam": "IPv4", 00:14:44.910 "traddr": "10.0.0.2", 00:14:44.910 "trsvcid": "4420" 00:14:44.910 }, 00:14:44.910 "peer_address": { 00:14:44.910 "trtype": "TCP", 00:14:44.910 "adrfam": "IPv4", 00:14:44.910 "traddr": "10.0.0.1", 00:14:44.910 "trsvcid": "40206" 
00:14:44.910 }, 00:14:44.910 "auth": { 00:14:44.910 "state": "completed", 00:14:44.910 "digest": "sha256", 00:14:44.910 "dhgroup": "ffdhe8192" 00:14:44.910 } 00:14:44.910 } 00:14:44.910 ]' 00:14:44.910 21:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.910 21:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.910 21:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.169 21:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:45.169 21:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.169 21:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.169 21:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.169 21:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.428 21:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:14:45.995 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.995 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:45.995 21:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.995 21:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.995 21:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.995 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:45.995 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.995 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.995 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:45.995 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:46.253 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:46.253 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.253 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:46.253 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:46.253 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:46.253 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.253 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.253 21:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.253 21:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.253 21:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.253 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.253 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.511 00:14:46.511 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.511 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.511 21:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.772 21:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.772 21:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.772 21:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.772 21:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.772 21:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.772 21:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.772 { 00:14:46.772 "cntlid": 49, 00:14:46.772 "qid": 0, 00:14:46.772 "state": "enabled", 00:14:46.772 "thread": "nvmf_tgt_poll_group_000", 00:14:46.772 "listen_address": { 00:14:46.772 "trtype": "TCP", 00:14:46.772 "adrfam": "IPv4", 00:14:46.772 "traddr": "10.0.0.2", 00:14:46.772 "trsvcid": "4420" 00:14:46.772 }, 00:14:46.772 "peer_address": { 00:14:46.772 "trtype": "TCP", 00:14:46.772 "adrfam": "IPv4", 00:14:46.772 "traddr": "10.0.0.1", 00:14:46.772 "trsvcid": "40236" 00:14:46.772 }, 00:14:46.772 "auth": { 00:14:46.772 "state": "completed", 00:14:46.772 "digest": "sha384", 00:14:46.772 "dhgroup": "null" 00:14:46.772 } 00:14:46.772 } 00:14:46.772 ]' 00:14:46.772 21:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.031 21:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:47.031 21:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.031 21:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:47.031 21:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.031 21:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.031 21:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.031 21:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.290 21:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:14:48.226 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.226 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.227 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.486 00:14:48.486 
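Each connect_authenticate iteration in this trace drives the same three host/target RPC steps before any in-band connection is attempted. A minimal sketch of that sequence for the sha384/null/key1 combination exercised above, assuming the harness wrappers rpc_cmd (target-side RPC) and hostrpc (rpc.py against /var/tmp/host.sock), and keyring entries key1/ckey1 already loaded on both sides:

  # 1. Constrain the host to the digest/dhgroup pair under test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
  # 2. Authorize the host NQN on the subsystem with the chosen key pair (target-side RPC)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 3. Attach a controller from the host, authenticating with the same keys
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1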
21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.486 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.486 21:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.745 21:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.745 21:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.745 21:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.745 21:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.745 21:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.745 21:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.745 { 00:14:48.745 "cntlid": 51, 00:14:48.745 "qid": 0, 00:14:48.745 "state": "enabled", 00:14:48.745 "thread": "nvmf_tgt_poll_group_000", 00:14:48.745 "listen_address": { 00:14:48.745 "trtype": "TCP", 00:14:48.745 "adrfam": "IPv4", 00:14:48.745 "traddr": "10.0.0.2", 00:14:48.745 "trsvcid": "4420" 00:14:48.745 }, 00:14:48.745 "peer_address": { 00:14:48.745 "trtype": "TCP", 00:14:48.745 "adrfam": "IPv4", 00:14:48.745 "traddr": "10.0.0.1", 00:14:48.745 "trsvcid": "40262" 00:14:48.745 }, 00:14:48.745 "auth": { 00:14:48.745 "state": "completed", 00:14:48.745 "digest": "sha384", 00:14:48.745 "dhgroup": "null" 00:14:48.745 } 00:14:48.745 } 00:14:48.745 ]' 00:14:48.745 21:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.004 21:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:49.004 21:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.004 21:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:49.004 21:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.004 21:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.004 21:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.004 21:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.263 21:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.199 21:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.864 00:14:50.864 21:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.864 21:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.864 21:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.864 21:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.864 21:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.864 21:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.864 21:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.864 21:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.864 21:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.864 { 00:14:50.864 "cntlid": 53, 00:14:50.864 "qid": 0, 00:14:50.864 "state": "enabled", 00:14:50.864 "thread": "nvmf_tgt_poll_group_000", 00:14:50.864 "listen_address": { 00:14:50.864 "trtype": 
"TCP", 00:14:50.864 "adrfam": "IPv4", 00:14:50.864 "traddr": "10.0.0.2", 00:14:50.864 "trsvcid": "4420" 00:14:50.864 }, 00:14:50.864 "peer_address": { 00:14:50.864 "trtype": "TCP", 00:14:50.864 "adrfam": "IPv4", 00:14:50.864 "traddr": "10.0.0.1", 00:14:50.864 "trsvcid": "40808" 00:14:50.864 }, 00:14:50.864 "auth": { 00:14:50.864 "state": "completed", 00:14:50.864 "digest": "sha384", 00:14:50.864 "dhgroup": "null" 00:14:50.864 } 00:14:50.864 } 00:14:50.864 ]' 00:14:50.864 21:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.122 21:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.122 21:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.122 21:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:51.122 21:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.122 21:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.122 21:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.122 21:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.380 21:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:14:51.948 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.948 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:51.948 21:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.948 21:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.948 21:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.948 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.948 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:51.948 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:52.208 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:14:52.208 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.208 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:52.208 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:52.208 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:52.208 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:14:52.208 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:14:52.208 21:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.208 21:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.208 21:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.208 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.208 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.467 00:14:52.467 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.467 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.467 21:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.726 21:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.726 21:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.726 21:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.726 21:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.726 21:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.726 21:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.726 { 00:14:52.726 "cntlid": 55, 00:14:52.726 "qid": 0, 00:14:52.726 "state": "enabled", 00:14:52.726 "thread": "nvmf_tgt_poll_group_000", 00:14:52.726 "listen_address": { 00:14:52.726 "trtype": "TCP", 00:14:52.726 "adrfam": "IPv4", 00:14:52.726 "traddr": "10.0.0.2", 00:14:52.726 "trsvcid": "4420" 00:14:52.726 }, 00:14:52.726 "peer_address": { 00:14:52.726 "trtype": "TCP", 00:14:52.726 "adrfam": "IPv4", 00:14:52.726 "traddr": "10.0.0.1", 00:14:52.726 "trsvcid": "40838" 00:14:52.726 }, 00:14:52.726 "auth": { 00:14:52.726 "state": "completed", 00:14:52.726 "digest": "sha384", 00:14:52.726 "dhgroup": "null" 00:14:52.726 } 00:14:52.726 } 00:14:52.726 ]' 00:14:52.726 21:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.726 21:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:52.726 21:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.985 21:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:52.985 21:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.985 21:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.985 21:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.986 21:14:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.245 21:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.182 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:14:54.441 00:14:54.441 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.441 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.441 21:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.700 21:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.700 21:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.700 21:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.700 21:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.700 21:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.700 21:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.700 { 00:14:54.700 "cntlid": 57, 00:14:54.700 "qid": 0, 00:14:54.700 "state": "enabled", 00:14:54.700 "thread": "nvmf_tgt_poll_group_000", 00:14:54.700 "listen_address": { 00:14:54.700 "trtype": "TCP", 00:14:54.700 "adrfam": "IPv4", 00:14:54.700 "traddr": "10.0.0.2", 00:14:54.700 "trsvcid": "4420" 00:14:54.700 }, 00:14:54.700 "peer_address": { 00:14:54.700 "trtype": "TCP", 00:14:54.700 "adrfam": "IPv4", 00:14:54.700 "traddr": "10.0.0.1", 00:14:54.700 "trsvcid": "40866" 00:14:54.700 }, 00:14:54.700 "auth": { 00:14:54.700 "state": "completed", 00:14:54.700 "digest": "sha384", 00:14:54.700 "dhgroup": "ffdhe2048" 00:14:54.700 } 00:14:54.700 } 00:14:54.700 ]' 00:14:54.700 21:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.960 21:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:54.960 21:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.960 21:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:54.960 21:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.960 21:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.960 21:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.960 21:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.218 21:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:14:55.785 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.785 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:55.785 21:14:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.785 21:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.043 21:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.043 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.043 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:56.043 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:56.044 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:14:56.044 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.044 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:56.044 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:56.044 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:56.044 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.044 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.044 21:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.044 21:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.044 21:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.044 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.044 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.611 00:14:56.611 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.611 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.611 21:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.871 21:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.871 21:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.871 21:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.871 21:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.871 21:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.871 21:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.871 { 00:14:56.871 "cntlid": 59, 
00:14:56.871 "qid": 0, 00:14:56.871 "state": "enabled", 00:14:56.871 "thread": "nvmf_tgt_poll_group_000", 00:14:56.871 "listen_address": { 00:14:56.871 "trtype": "TCP", 00:14:56.871 "adrfam": "IPv4", 00:14:56.871 "traddr": "10.0.0.2", 00:14:56.871 "trsvcid": "4420" 00:14:56.871 }, 00:14:56.871 "peer_address": { 00:14:56.871 "trtype": "TCP", 00:14:56.871 "adrfam": "IPv4", 00:14:56.871 "traddr": "10.0.0.1", 00:14:56.871 "trsvcid": "40892" 00:14:56.871 }, 00:14:56.871 "auth": { 00:14:56.871 "state": "completed", 00:14:56.871 "digest": "sha384", 00:14:56.871 "dhgroup": "ffdhe2048" 00:14:56.871 } 00:14:56.871 } 00:14:56.871 ]' 00:14:56.871 21:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.871 21:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:56.871 21:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.871 21:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:56.871 21:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.871 21:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.871 21:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.871 21:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.130 21:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 
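After each attach the script confirms that DH-HMAC-CHAP actually completed, not merely that the connection came up. A minimal sketch of that verification for the sha384/ffdhe2048 iteration above, assuming the same rpc_cmd/hostrpc wrappers and subsystem NQN; the expected digest and dhgroup are whatever was passed to bdev_nvme_set_options for the current loop:

  # Host side: the attached controller must be visible by name
  name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  # Target side: the admin qpair must report the negotiated auth parameters as completed
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]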
00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.066 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.635 00:14:58.635 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.635 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.635 21:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.635 21:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.635 21:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.635 21:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.635 21:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.894 21:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.894 21:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.894 { 00:14:58.894 "cntlid": 61, 00:14:58.894 "qid": 0, 00:14:58.894 "state": "enabled", 00:14:58.894 "thread": "nvmf_tgt_poll_group_000", 00:14:58.894 "listen_address": { 00:14:58.894 "trtype": "TCP", 00:14:58.894 "adrfam": "IPv4", 00:14:58.894 "traddr": "10.0.0.2", 00:14:58.894 "trsvcid": "4420" 00:14:58.894 }, 00:14:58.894 "peer_address": { 00:14:58.894 "trtype": "TCP", 00:14:58.894 "adrfam": "IPv4", 00:14:58.894 "traddr": "10.0.0.1", 00:14:58.894 "trsvcid": "40922" 00:14:58.894 }, 00:14:58.894 "auth": { 00:14:58.894 "state": "completed", 00:14:58.894 "digest": "sha384", 00:14:58.894 "dhgroup": "ffdhe2048" 00:14:58.894 } 00:14:58.894 } 00:14:58.894 ]' 00:14:58.894 21:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.894 21:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.894 21:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.894 21:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:58.894 21:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:14:58.894 21:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.894 21:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.894 21:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.153 21:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:00.089 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:00.348 00:15:00.348 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.348 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.348 21:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.606 21:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.606 21:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.606 21:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.606 21:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.606 21:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.606 21:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.606 { 00:15:00.606 "cntlid": 63, 00:15:00.606 "qid": 0, 00:15:00.606 "state": "enabled", 00:15:00.606 "thread": "nvmf_tgt_poll_group_000", 00:15:00.606 "listen_address": { 00:15:00.606 "trtype": "TCP", 00:15:00.606 "adrfam": "IPv4", 00:15:00.606 "traddr": "10.0.0.2", 00:15:00.606 "trsvcid": "4420" 00:15:00.606 }, 00:15:00.606 "peer_address": { 00:15:00.606 "trtype": "TCP", 00:15:00.606 "adrfam": "IPv4", 00:15:00.606 "traddr": "10.0.0.1", 00:15:00.606 "trsvcid": "46348" 00:15:00.606 }, 00:15:00.606 "auth": { 00:15:00.606 "state": "completed", 00:15:00.606 "digest": "sha384", 00:15:00.606 "dhgroup": "ffdhe2048" 00:15:00.606 } 00:15:00.606 } 00:15:00.606 ]' 00:15:00.606 21:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.606 21:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:00.606 21:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.606 21:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:00.606 21:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.865 21:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.865 21:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.865 21:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.124 21:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:15:01.692 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.692 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:01.692 21:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.692 21:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.692 21:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.692 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.692 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.692 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:01.692 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:01.950 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:01.950 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.950 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:01.950 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:01.950 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:01.950 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.950 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.950 21:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.950 21:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.950 21:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.950 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.950 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.209 00:15:02.209 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.209 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.209 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.467 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.467 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.467 21:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.467 21:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
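Once the RPC-level attach and checks pass, the same key material is exercised in-band with nvme-cli, and the iteration is torn down so the next digest/dhgroup/key combination starts from a clean state. A minimal sketch of that tail end, assuming an nvme-cli build with DH-HMAC-CHAP support; $key and $ckey stand in for the full DHHC-1:xx:... secret strings that appear verbatim elsewhere in this trace:

  # Drop the RPC-attached controller, then authenticate with the kernel initiator
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 \
    --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # Revoke the host on the target so the next key/dhgroup combination starts clean
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291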
00:15:02.467 21:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.467 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.467 { 00:15:02.467 "cntlid": 65, 00:15:02.467 "qid": 0, 00:15:02.467 "state": "enabled", 00:15:02.467 "thread": "nvmf_tgt_poll_group_000", 00:15:02.467 "listen_address": { 00:15:02.467 "trtype": "TCP", 00:15:02.467 "adrfam": "IPv4", 00:15:02.467 "traddr": "10.0.0.2", 00:15:02.467 "trsvcid": "4420" 00:15:02.467 }, 00:15:02.467 "peer_address": { 00:15:02.467 "trtype": "TCP", 00:15:02.467 "adrfam": "IPv4", 00:15:02.467 "traddr": "10.0.0.1", 00:15:02.467 "trsvcid": "46366" 00:15:02.467 }, 00:15:02.468 "auth": { 00:15:02.468 "state": "completed", 00:15:02.468 "digest": "sha384", 00:15:02.468 "dhgroup": "ffdhe3072" 00:15:02.468 } 00:15:02.468 } 00:15:02.468 ]' 00:15:02.468 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.468 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.468 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.468 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:02.468 21:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.727 21:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.727 21:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.727 21:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.727 21:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:15:03.664 21:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.664 21:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:03.664 21:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.664 21:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.664 21:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.664 21:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.664 21:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:03.664 21:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:03.923 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:03.923 21:14:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.923 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:03.923 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:03.923 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:03.923 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.923 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.923 21:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.923 21:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.923 21:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.923 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.923 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.182 00:15:04.182 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.182 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.182 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.441 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.441 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.441 21:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.441 21:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.441 21:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.441 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.441 { 00:15:04.441 "cntlid": 67, 00:15:04.441 "qid": 0, 00:15:04.441 "state": "enabled", 00:15:04.441 "thread": "nvmf_tgt_poll_group_000", 00:15:04.441 "listen_address": { 00:15:04.441 "trtype": "TCP", 00:15:04.441 "adrfam": "IPv4", 00:15:04.441 "traddr": "10.0.0.2", 00:15:04.441 "trsvcid": "4420" 00:15:04.441 }, 00:15:04.441 "peer_address": { 00:15:04.441 "trtype": "TCP", 00:15:04.441 "adrfam": "IPv4", 00:15:04.441 "traddr": "10.0.0.1", 00:15:04.441 "trsvcid": "46382" 00:15:04.441 }, 00:15:04.441 "auth": { 00:15:04.441 "state": "completed", 00:15:04.441 "digest": "sha384", 00:15:04.441 "dhgroup": "ffdhe3072" 00:15:04.441 } 00:15:04.441 } 00:15:04.441 ]' 00:15:04.441 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.441 21:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.441 21:14:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.700 21:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:04.700 21:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.700 21:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.700 21:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.700 21:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.965 21:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:15:05.542 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.542 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:05.542 21:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.542 21:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.542 21:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.543 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.543 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:05.543 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:05.801 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:05.801 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.801 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:05.801 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:05.801 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:05.801 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.801 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.801 21:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.801 21:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.801 21:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.801 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.801 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.366 00:15:06.366 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.366 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.366 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.624 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.624 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.624 21:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.624 21:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.624 21:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.624 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.624 { 00:15:06.624 "cntlid": 69, 00:15:06.624 "qid": 0, 00:15:06.624 "state": "enabled", 00:15:06.624 "thread": "nvmf_tgt_poll_group_000", 00:15:06.624 "listen_address": { 00:15:06.624 "trtype": "TCP", 00:15:06.624 "adrfam": "IPv4", 00:15:06.624 "traddr": "10.0.0.2", 00:15:06.624 "trsvcid": "4420" 00:15:06.624 }, 00:15:06.624 "peer_address": { 00:15:06.624 "trtype": "TCP", 00:15:06.624 "adrfam": "IPv4", 00:15:06.624 "traddr": "10.0.0.1", 00:15:06.624 "trsvcid": "46414" 00:15:06.624 }, 00:15:06.624 "auth": { 00:15:06.624 "state": "completed", 00:15:06.624 "digest": "sha384", 00:15:06.624 "dhgroup": "ffdhe3072" 00:15:06.624 } 00:15:06.624 } 00:15:06.624 ]' 00:15:06.624 21:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.624 21:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.624 21:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.624 21:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:06.624 21:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.624 21:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.624 21:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.624 21:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.883 21:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret 
DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:15:07.450 21:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.709 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:07.709 21:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.709 21:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.709 21:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.709 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.709 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:07.709 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:07.968 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:07.968 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.968 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:07.968 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:07.968 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:07.968 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.968 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:15:07.968 21:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.968 21:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.968 21:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.968 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:07.968 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.227 00:15:08.227 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:08.227 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.227 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.485 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.485 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:08.485 21:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.485 21:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.485 21:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.485 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.485 { 00:15:08.485 "cntlid": 71, 00:15:08.485 "qid": 0, 00:15:08.485 "state": "enabled", 00:15:08.485 "thread": "nvmf_tgt_poll_group_000", 00:15:08.485 "listen_address": { 00:15:08.485 "trtype": "TCP", 00:15:08.485 "adrfam": "IPv4", 00:15:08.485 "traddr": "10.0.0.2", 00:15:08.485 "trsvcid": "4420" 00:15:08.485 }, 00:15:08.485 "peer_address": { 00:15:08.485 "trtype": "TCP", 00:15:08.485 "adrfam": "IPv4", 00:15:08.485 "traddr": "10.0.0.1", 00:15:08.485 "trsvcid": "46438" 00:15:08.485 }, 00:15:08.485 "auth": { 00:15:08.485 "state": "completed", 00:15:08.485 "digest": "sha384", 00:15:08.485 "dhgroup": "ffdhe3072" 00:15:08.485 } 00:15:08.485 } 00:15:08.485 ]' 00:15:08.485 21:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.485 21:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.485 21:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.744 21:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:08.744 21:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.744 21:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.744 21:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.744 21:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.003 21:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:15:09.570 21:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.570 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:09.570 21:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.570 21:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.570 21:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.570 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:09.570 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.570 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:09.570 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:09.828 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:09.828 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.828 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.828 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:09.828 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:09.828 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.828 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.828 21:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.828 21:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.828 21:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.828 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.828 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.086 00:15:10.343 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.343 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.343 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.602 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.602 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.602 21:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.602 21:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.602 21:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.602 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.602 { 00:15:10.602 "cntlid": 73, 00:15:10.602 "qid": 0, 00:15:10.602 "state": "enabled", 00:15:10.602 "thread": "nvmf_tgt_poll_group_000", 00:15:10.602 "listen_address": { 00:15:10.602 "trtype": "TCP", 00:15:10.602 "adrfam": "IPv4", 00:15:10.602 "traddr": "10.0.0.2", 00:15:10.602 "trsvcid": "4420" 00:15:10.602 }, 00:15:10.602 "peer_address": { 00:15:10.602 "trtype": "TCP", 00:15:10.602 "adrfam": "IPv4", 00:15:10.602 "traddr": "10.0.0.1", 00:15:10.602 "trsvcid": "40550" 00:15:10.602 }, 00:15:10.602 "auth": { 00:15:10.602 "state": "completed", 00:15:10.602 "digest": "sha384", 00:15:10.602 "dhgroup": "ffdhe4096" 00:15:10.602 } 00:15:10.602 } 00:15:10.602 
]' 00:15:10.602 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.602 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.602 21:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.602 21:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:10.602 21:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.602 21:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.602 21:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.602 21:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.861 21:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:15:11.795 21:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.795 21:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:11.795 21:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.795 21:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.795 21:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.795 21:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.795 21:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:11.795 21:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:11.795 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:11.795 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.795 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:11.795 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:11.795 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:11.795 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.795 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.795 21:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.795 21:14:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.795 21:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.795 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.795 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.052 00:15:12.311 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.311 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.311 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.311 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.311 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.311 21:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.311 21:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.311 21:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.311 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.311 { 00:15:12.311 "cntlid": 75, 00:15:12.311 "qid": 0, 00:15:12.311 "state": "enabled", 00:15:12.311 "thread": "nvmf_tgt_poll_group_000", 00:15:12.311 "listen_address": { 00:15:12.311 "trtype": "TCP", 00:15:12.311 "adrfam": "IPv4", 00:15:12.311 "traddr": "10.0.0.2", 00:15:12.311 "trsvcid": "4420" 00:15:12.311 }, 00:15:12.311 "peer_address": { 00:15:12.311 "trtype": "TCP", 00:15:12.311 "adrfam": "IPv4", 00:15:12.311 "traddr": "10.0.0.1", 00:15:12.311 "trsvcid": "40572" 00:15:12.311 }, 00:15:12.311 "auth": { 00:15:12.311 "state": "completed", 00:15:12.311 "digest": "sha384", 00:15:12.311 "dhgroup": "ffdhe4096" 00:15:12.311 } 00:15:12.311 } 00:15:12.311 ]' 00:15:12.311 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.571 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.571 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.571 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:12.571 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.571 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.571 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.571 21:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.830 21:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:15:13.397 21:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.398 21:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:13.398 21:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.398 21:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.398 21:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.398 21:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.398 21:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:13.398 21:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:13.656 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:13.656 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.656 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:13.656 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:13.656 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:13.656 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.656 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.656 21:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.656 21:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.656 21:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.656 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.656 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.222 00:15:14.222 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.222 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.222 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.482 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.482 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.482 21:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.482 21:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.482 21:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.482 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.482 { 00:15:14.482 "cntlid": 77, 00:15:14.482 "qid": 0, 00:15:14.482 "state": "enabled", 00:15:14.482 "thread": "nvmf_tgt_poll_group_000", 00:15:14.482 "listen_address": { 00:15:14.482 "trtype": "TCP", 00:15:14.482 "adrfam": "IPv4", 00:15:14.482 "traddr": "10.0.0.2", 00:15:14.482 "trsvcid": "4420" 00:15:14.482 }, 00:15:14.482 "peer_address": { 00:15:14.482 "trtype": "TCP", 00:15:14.482 "adrfam": "IPv4", 00:15:14.482 "traddr": "10.0.0.1", 00:15:14.482 "trsvcid": "40596" 00:15:14.482 }, 00:15:14.482 "auth": { 00:15:14.482 "state": "completed", 00:15:14.482 "digest": "sha384", 00:15:14.482 "dhgroup": "ffdhe4096" 00:15:14.482 } 00:15:14.482 } 00:15:14.482 ]' 00:15:14.482 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.482 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.482 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.482 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:14.482 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.482 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.482 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.482 21:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.741 21:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:15:15.678 21:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.678 21:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:15.678 21:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.678 21:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.678 21:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.678 21:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.678 21:14:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:15.678 21:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:15.678 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:15.678 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.678 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:15.678 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:15.678 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:15.678 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.678 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:15:15.678 21:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.678 21:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.678 21:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.678 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:15.678 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.246 00:15:16.246 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.246 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.246 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.505 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.505 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.505 21:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.505 21:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.505 21:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.505 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.505 { 00:15:16.505 "cntlid": 79, 00:15:16.505 "qid": 0, 00:15:16.505 "state": "enabled", 00:15:16.505 "thread": "nvmf_tgt_poll_group_000", 00:15:16.505 "listen_address": { 00:15:16.505 "trtype": "TCP", 00:15:16.505 "adrfam": "IPv4", 00:15:16.505 "traddr": "10.0.0.2", 00:15:16.505 "trsvcid": "4420" 00:15:16.505 }, 00:15:16.505 "peer_address": { 00:15:16.505 "trtype": "TCP", 00:15:16.505 "adrfam": "IPv4", 00:15:16.505 "traddr": "10.0.0.1", 00:15:16.505 "trsvcid": "40614" 
00:15:16.505 }, 00:15:16.505 "auth": { 00:15:16.505 "state": "completed", 00:15:16.505 "digest": "sha384", 00:15:16.505 "dhgroup": "ffdhe4096" 00:15:16.505 } 00:15:16.505 } 00:15:16.505 ]' 00:15:16.505 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.505 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.505 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.505 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:16.505 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.505 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.505 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.505 21:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.764 21:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:15:17.701 21:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.701 21:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:17.701 21:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.701 21:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.701 21:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.701 21:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:17.701 21:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.701 21:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:17.701 21:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:17.701 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:17.701 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.701 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:17.701 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:17.701 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:17.701 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.701 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.701 21:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.701 21:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.701 21:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.701 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.701 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.269 00:15:18.269 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.269 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.269 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.528 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.528 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.528 21:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.528 21:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.528 21:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.528 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.528 { 00:15:18.528 "cntlid": 81, 00:15:18.528 "qid": 0, 00:15:18.528 "state": "enabled", 00:15:18.528 "thread": "nvmf_tgt_poll_group_000", 00:15:18.528 "listen_address": { 00:15:18.528 "trtype": "TCP", 00:15:18.528 "adrfam": "IPv4", 00:15:18.528 "traddr": "10.0.0.2", 00:15:18.528 "trsvcid": "4420" 00:15:18.528 }, 00:15:18.528 "peer_address": { 00:15:18.528 "trtype": "TCP", 00:15:18.528 "adrfam": "IPv4", 00:15:18.528 "traddr": "10.0.0.1", 00:15:18.528 "trsvcid": "40640" 00:15:18.528 }, 00:15:18.528 "auth": { 00:15:18.528 "state": "completed", 00:15:18.528 "digest": "sha384", 00:15:18.528 "dhgroup": "ffdhe6144" 00:15:18.528 } 00:15:18.528 } 00:15:18.528 ]' 00:15:18.529 21:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.529 21:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.529 21:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.529 21:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:18.529 21:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.788 21:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.788 21:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.788 21:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.047 21:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:15:19.615 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.616 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:19.616 21:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.616 21:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.616 21:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.616 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.616 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:19.616 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:19.875 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:19.875 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:19.875 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:19.875 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:19.875 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:19.875 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.875 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.875 21:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.875 21:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.875 21:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.875 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.875 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
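The trace above is one pass of connect_authenticate() in target/auth.sh, here for sha384/ffdhe6144 with key index 1: the host-side bdev_nvme layer is first pinned to a single digest/dhgroup pair, the host NQN is re-added to the subsystem with the DH-HMAC-CHAP key under test, and a controller is attached through the SPDK initiator with the same key pair. Condensed into the underlying commands (key1/ckey1 are key names prepared earlier in this run; socket path, addresses and NQNs are exactly as logged):

  # host side (hostrpc = rpc.py -s /var/tmp/host.sock): allow only the digest/dhgroup pair under test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # target side (rpc_cmd, default socket): authorize the host NQN with key1/ckey1
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # host side: attach a controller through the SPDK initiator using the same keys
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1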
00:15:20.444 00:15:20.444 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.444 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.444 21:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.703 21:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.703 21:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.703 21:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.703 21:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.703 21:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.703 21:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.703 { 00:15:20.703 "cntlid": 83, 00:15:20.703 "qid": 0, 00:15:20.703 "state": "enabled", 00:15:20.703 "thread": "nvmf_tgt_poll_group_000", 00:15:20.703 "listen_address": { 00:15:20.703 "trtype": "TCP", 00:15:20.703 "adrfam": "IPv4", 00:15:20.703 "traddr": "10.0.0.2", 00:15:20.703 "trsvcid": "4420" 00:15:20.703 }, 00:15:20.703 "peer_address": { 00:15:20.703 "trtype": "TCP", 00:15:20.703 "adrfam": "IPv4", 00:15:20.703 "traddr": "10.0.0.1", 00:15:20.703 "trsvcid": "37032" 00:15:20.704 }, 00:15:20.704 "auth": { 00:15:20.704 "state": "completed", 00:15:20.704 "digest": "sha384", 00:15:20.704 "dhgroup": "ffdhe6144" 00:15:20.704 } 00:15:20.704 } 00:15:20.704 ]' 00:15:20.704 21:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.704 21:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.704 21:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.704 21:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:20.704 21:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.704 21:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.704 21:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.704 21:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.963 21:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:15:21.937 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.937 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:21.937 21:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.937 21:14:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.937 21:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.937 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.937 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:21.937 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:21.937 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:21.937 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.937 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:21.937 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:21.937 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:21.937 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.938 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.938 21:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.938 21:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.938 21:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.938 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.938 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.504 00:15:22.504 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.504 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.504 21:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.762 21:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.762 21:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.762 21:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.762 21:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.762 21:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.762 21:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.762 { 00:15:22.762 "cntlid": 85, 00:15:22.762 "qid": 0, 00:15:22.762 "state": "enabled", 00:15:22.762 "thread": 
"nvmf_tgt_poll_group_000", 00:15:22.762 "listen_address": { 00:15:22.762 "trtype": "TCP", 00:15:22.762 "adrfam": "IPv4", 00:15:22.762 "traddr": "10.0.0.2", 00:15:22.762 "trsvcid": "4420" 00:15:22.762 }, 00:15:22.762 "peer_address": { 00:15:22.762 "trtype": "TCP", 00:15:22.762 "adrfam": "IPv4", 00:15:22.762 "traddr": "10.0.0.1", 00:15:22.762 "trsvcid": "37054" 00:15:22.762 }, 00:15:22.762 "auth": { 00:15:22.762 "state": "completed", 00:15:22.762 "digest": "sha384", 00:15:22.762 "dhgroup": "ffdhe6144" 00:15:22.762 } 00:15:22.762 } 00:15:22.762 ]' 00:15:22.762 21:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.762 21:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.762 21:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.020 21:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:23.020 21:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.020 21:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.020 21:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.020 21:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.279 21:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:15:23.847 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.847 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:23.847 21:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.847 21:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.847 21:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.847 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.847 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:23.847 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:24.106 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:24.106 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.106 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:24.106 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:24.106 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 
00:15:24.106 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.106 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:15:24.106 21:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.106 21:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.106 21:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.106 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:24.106 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:24.365 00:15:24.624 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.624 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.624 21:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.883 21:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.883 21:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.883 21:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.883 21:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.883 21:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.883 21:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.883 { 00:15:24.883 "cntlid": 87, 00:15:24.883 "qid": 0, 00:15:24.883 "state": "enabled", 00:15:24.883 "thread": "nvmf_tgt_poll_group_000", 00:15:24.883 "listen_address": { 00:15:24.883 "trtype": "TCP", 00:15:24.883 "adrfam": "IPv4", 00:15:24.883 "traddr": "10.0.0.2", 00:15:24.883 "trsvcid": "4420" 00:15:24.883 }, 00:15:24.883 "peer_address": { 00:15:24.883 "trtype": "TCP", 00:15:24.883 "adrfam": "IPv4", 00:15:24.883 "traddr": "10.0.0.1", 00:15:24.883 "trsvcid": "37076" 00:15:24.883 }, 00:15:24.883 "auth": { 00:15:24.883 "state": "completed", 00:15:24.883 "digest": "sha384", 00:15:24.883 "dhgroup": "ffdhe6144" 00:15:24.883 } 00:15:24.883 } 00:15:24.883 ]' 00:15:24.883 21:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.883 21:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.883 21:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.883 21:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:24.883 21:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.883 21:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.883 21:14:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.883 21:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.143 21:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:15:26.078 21:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.079 21:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.646 00:15:26.646 21:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.646 21:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.646 21:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.905 21:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.905 21:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.905 21:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.905 21:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.905 21:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.162 21:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:27.162 { 00:15:27.162 "cntlid": 89, 00:15:27.162 "qid": 0, 00:15:27.162 "state": "enabled", 00:15:27.162 "thread": "nvmf_tgt_poll_group_000", 00:15:27.162 "listen_address": { 00:15:27.162 "trtype": "TCP", 00:15:27.162 "adrfam": "IPv4", 00:15:27.162 "traddr": "10.0.0.2", 00:15:27.162 "trsvcid": "4420" 00:15:27.162 }, 00:15:27.162 "peer_address": { 00:15:27.162 "trtype": "TCP", 00:15:27.162 "adrfam": "IPv4", 00:15:27.162 "traddr": "10.0.0.1", 00:15:27.162 "trsvcid": "37102" 00:15:27.162 }, 00:15:27.162 "auth": { 00:15:27.162 "state": "completed", 00:15:27.162 "digest": "sha384", 00:15:27.162 "dhgroup": "ffdhe8192" 00:15:27.162 } 00:15:27.162 } 00:15:27.162 ]' 00:15:27.162 21:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:27.162 21:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.162 21:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:27.163 21:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:27.163 21:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:27.163 21:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.163 21:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.163 21:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.420 21:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:15:27.986 21:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.986 21:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:27.986 21:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.986 21:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.986 21:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.986 21:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.986 21:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:27.986 21:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:28.245 21:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:28.245 21:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.245 21:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:28.245 21:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:28.245 21:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:28.245 21:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.245 21:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.245 21:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.245 21:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.245 21:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.245 21:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.245 21:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.810 00:15:29.068 21:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.068 21:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.068 21:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.328 21:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.328 21:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.328 21:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.328 21:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.328 21:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.328 
21:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.328 { 00:15:29.328 "cntlid": 91, 00:15:29.328 "qid": 0, 00:15:29.328 "state": "enabled", 00:15:29.328 "thread": "nvmf_tgt_poll_group_000", 00:15:29.328 "listen_address": { 00:15:29.328 "trtype": "TCP", 00:15:29.328 "adrfam": "IPv4", 00:15:29.328 "traddr": "10.0.0.2", 00:15:29.328 "trsvcid": "4420" 00:15:29.328 }, 00:15:29.328 "peer_address": { 00:15:29.328 "trtype": "TCP", 00:15:29.328 "adrfam": "IPv4", 00:15:29.328 "traddr": "10.0.0.1", 00:15:29.328 "trsvcid": "37114" 00:15:29.328 }, 00:15:29.328 "auth": { 00:15:29.328 "state": "completed", 00:15:29.328 "digest": "sha384", 00:15:29.328 "dhgroup": "ffdhe8192" 00:15:29.328 } 00:15:29.328 } 00:15:29.328 ]' 00:15:29.328 21:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.328 21:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.328 21:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.328 21:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:29.328 21:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.328 21:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.328 21:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.328 21:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.588 21:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:15:30.523 21:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.523 21:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:30.523 21:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.523 21:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.523 21:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.523 21:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.523 21:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:30.523 21:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:30.523 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:30.523 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.523 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:15:30.523 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:30.523 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:30.523 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.523 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.523 21:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.523 21:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.782 21:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.782 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.782 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.350 00:15:31.350 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:31.350 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.350 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.609 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.609 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.609 21:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.609 21:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.609 21:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.609 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.609 { 00:15:31.609 "cntlid": 93, 00:15:31.609 "qid": 0, 00:15:31.609 "state": "enabled", 00:15:31.609 "thread": "nvmf_tgt_poll_group_000", 00:15:31.609 "listen_address": { 00:15:31.609 "trtype": "TCP", 00:15:31.609 "adrfam": "IPv4", 00:15:31.609 "traddr": "10.0.0.2", 00:15:31.609 "trsvcid": "4420" 00:15:31.609 }, 00:15:31.609 "peer_address": { 00:15:31.609 "trtype": "TCP", 00:15:31.609 "adrfam": "IPv4", 00:15:31.609 "traddr": "10.0.0.1", 00:15:31.609 "trsvcid": "39832" 00:15:31.609 }, 00:15:31.609 "auth": { 00:15:31.609 "state": "completed", 00:15:31.609 "digest": "sha384", 00:15:31.609 "dhgroup": "ffdhe8192" 00:15:31.609 } 00:15:31.609 } 00:15:31.609 ]' 00:15:31.609 21:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.609 21:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.609 21:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.609 21:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:31.609 21:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.609 21:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.609 21:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.609 21:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.174 21:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:15:32.742 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.742 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:32.742 21:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.742 21:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.742 21:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.742 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.742 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:32.742 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:33.001 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:33.001 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.001 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:33.001 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:33.001 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:33.001 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.001 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:15:33.002 21:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.002 21:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.002 21:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.002 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:33.002 21:14:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:33.569 00:15:33.569 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.569 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.569 21:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.828 21:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.828 21:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.828 21:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.828 21:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.828 21:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.828 21:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.828 { 00:15:33.828 "cntlid": 95, 00:15:33.828 "qid": 0, 00:15:33.828 "state": "enabled", 00:15:33.828 "thread": "nvmf_tgt_poll_group_000", 00:15:33.828 "listen_address": { 00:15:33.828 "trtype": "TCP", 00:15:33.828 "adrfam": "IPv4", 00:15:33.828 "traddr": "10.0.0.2", 00:15:33.828 "trsvcid": "4420" 00:15:33.828 }, 00:15:33.828 "peer_address": { 00:15:33.828 "trtype": "TCP", 00:15:33.828 "adrfam": "IPv4", 00:15:33.828 "traddr": "10.0.0.1", 00:15:33.828 "trsvcid": "39848" 00:15:33.828 }, 00:15:33.828 "auth": { 00:15:33.828 "state": "completed", 00:15:33.828 "digest": "sha384", 00:15:33.828 "dhgroup": "ffdhe8192" 00:15:33.828 } 00:15:33.828 } 00:15:33.828 ]' 00:15:33.828 21:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.828 21:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.828 21:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.828 21:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:33.828 21:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.087 21:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.087 21:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.087 21:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.087 21:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:15:35.022 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.022 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:35.022 21:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.022 21:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.022 21:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.022 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:35.022 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.022 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.022 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:35.022 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:35.280 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:35.280 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.280 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:35.280 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:35.280 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:35.281 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.281 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.281 21:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.281 21:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.281 21:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.281 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.281 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.539 00:15:35.539 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.539 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.539 21:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.799 21:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.799 21:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.799 21:14:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.799 21:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.799 21:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.799 21:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.799 { 00:15:35.799 "cntlid": 97, 00:15:35.799 "qid": 0, 00:15:35.799 "state": "enabled", 00:15:35.799 "thread": "nvmf_tgt_poll_group_000", 00:15:35.799 "listen_address": { 00:15:35.799 "trtype": "TCP", 00:15:35.799 "adrfam": "IPv4", 00:15:35.799 "traddr": "10.0.0.2", 00:15:35.799 "trsvcid": "4420" 00:15:35.799 }, 00:15:35.799 "peer_address": { 00:15:35.799 "trtype": "TCP", 00:15:35.799 "adrfam": "IPv4", 00:15:35.799 "traddr": "10.0.0.1", 00:15:35.799 "trsvcid": "39876" 00:15:35.799 }, 00:15:35.799 "auth": { 00:15:35.799 "state": "completed", 00:15:35.799 "digest": "sha512", 00:15:35.799 "dhgroup": "null" 00:15:35.799 } 00:15:35.799 } 00:15:35.799 ]' 00:15:35.799 21:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.799 21:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:35.799 21:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.799 21:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:35.799 21:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.058 21:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.058 21:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.058 21:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.317 21:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:15:36.885 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.885 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:36.885 21:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.886 21:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.886 21:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.886 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.886 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:36.886 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:37.144 21:14:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:37.144 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.144 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:37.144 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:37.144 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:37.144 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.144 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.144 21:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.144 21:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.144 21:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.144 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.144 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.405 00:15:37.405 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.405 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.405 21:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.685 21:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.686 21:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.686 21:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.686 21:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.686 21:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.686 21:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.686 { 00:15:37.686 "cntlid": 99, 00:15:37.686 "qid": 0, 00:15:37.686 "state": "enabled", 00:15:37.686 "thread": "nvmf_tgt_poll_group_000", 00:15:37.686 "listen_address": { 00:15:37.686 "trtype": "TCP", 00:15:37.686 "adrfam": "IPv4", 00:15:37.686 "traddr": "10.0.0.2", 00:15:37.686 "trsvcid": "4420" 00:15:37.686 }, 00:15:37.686 "peer_address": { 00:15:37.686 "trtype": "TCP", 00:15:37.686 "adrfam": "IPv4", 00:15:37.686 "traddr": "10.0.0.1", 00:15:37.686 "trsvcid": "39896" 00:15:37.686 }, 00:15:37.686 "auth": { 00:15:37.686 "state": "completed", 00:15:37.686 "digest": "sha512", 00:15:37.686 "dhgroup": "null" 00:15:37.686 } 00:15:37.686 } 00:15:37.686 ]' 00:15:37.686 21:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.686 21:14:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:37.686 21:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.948 21:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:37.948 21:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.948 21:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.948 21:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.948 21:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.206 21:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:15:38.771 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.771 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:38.771 21:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.771 21:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.771 21:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.771 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.771 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:38.771 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:39.030 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:39.030 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.030 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:39.030 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:39.030 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:39.030 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.030 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.030 21:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.030 21:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.030 21:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.030 21:14:50 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.030 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.289 00:15:39.289 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.289 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.289 21:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.548 21:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.548 21:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.548 21:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.548 21:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.807 21:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.807 21:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.807 { 00:15:39.807 "cntlid": 101, 00:15:39.807 "qid": 0, 00:15:39.807 "state": "enabled", 00:15:39.807 "thread": "nvmf_tgt_poll_group_000", 00:15:39.807 "listen_address": { 00:15:39.807 "trtype": "TCP", 00:15:39.807 "adrfam": "IPv4", 00:15:39.807 "traddr": "10.0.0.2", 00:15:39.807 "trsvcid": "4420" 00:15:39.807 }, 00:15:39.807 "peer_address": { 00:15:39.807 "trtype": "TCP", 00:15:39.807 "adrfam": "IPv4", 00:15:39.807 "traddr": "10.0.0.1", 00:15:39.807 "trsvcid": "39918" 00:15:39.807 }, 00:15:39.807 "auth": { 00:15:39.807 "state": "completed", 00:15:39.807 "digest": "sha512", 00:15:39.807 "dhgroup": "null" 00:15:39.807 } 00:15:39.807 } 00:15:39.807 ]' 00:15:39.807 21:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.807 21:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.807 21:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.808 21:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:39.808 21:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.808 21:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.808 21:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.808 21:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.066 21:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret 
DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:15:40.633 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.892 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:40.892 21:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.892 21:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.892 21:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.892 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.892 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:40.892 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:41.152 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:41.152 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.152 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:41.152 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:41.152 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:41.152 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.152 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:15:41.152 21:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.152 21:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.152 21:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.152 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.152 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.412 00:15:41.412 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.412 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.412 21:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.671 21:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.671 21:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:15:41.671 21:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.671 21:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.671 21:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.671 21:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.671 { 00:15:41.671 "cntlid": 103, 00:15:41.671 "qid": 0, 00:15:41.671 "state": "enabled", 00:15:41.671 "thread": "nvmf_tgt_poll_group_000", 00:15:41.671 "listen_address": { 00:15:41.671 "trtype": "TCP", 00:15:41.671 "adrfam": "IPv4", 00:15:41.671 "traddr": "10.0.0.2", 00:15:41.671 "trsvcid": "4420" 00:15:41.671 }, 00:15:41.671 "peer_address": { 00:15:41.671 "trtype": "TCP", 00:15:41.671 "adrfam": "IPv4", 00:15:41.671 "traddr": "10.0.0.1", 00:15:41.671 "trsvcid": "40098" 00:15:41.671 }, 00:15:41.671 "auth": { 00:15:41.671 "state": "completed", 00:15:41.671 "digest": "sha512", 00:15:41.671 "dhgroup": "null" 00:15:41.671 } 00:15:41.671 } 00:15:41.671 ]' 00:15:41.671 21:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.671 21:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:41.671 21:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.671 21:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:41.671 21:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.671 21:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.671 21:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.671 21:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.929 21:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:15:42.497 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.497 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:42.497 21:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.497 21:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.497 21:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.497 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.497 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.497 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:42.497 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe2048 00:15:43.065 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:43.065 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.065 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:43.065 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:43.065 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:43.065 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.065 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.065 21:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.065 21:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.065 21:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.065 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.065 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.065 00:15:43.325 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.325 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.325 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.585 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.585 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.585 21:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.585 21:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.585 21:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.585 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.585 { 00:15:43.585 "cntlid": 105, 00:15:43.585 "qid": 0, 00:15:43.585 "state": "enabled", 00:15:43.585 "thread": "nvmf_tgt_poll_group_000", 00:15:43.585 "listen_address": { 00:15:43.585 "trtype": "TCP", 00:15:43.585 "adrfam": "IPv4", 00:15:43.585 "traddr": "10.0.0.2", 00:15:43.585 "trsvcid": "4420" 00:15:43.585 }, 00:15:43.585 "peer_address": { 00:15:43.585 "trtype": "TCP", 00:15:43.585 "adrfam": "IPv4", 00:15:43.585 "traddr": "10.0.0.1", 00:15:43.585 "trsvcid": "40134" 00:15:43.585 }, 00:15:43.585 "auth": { 00:15:43.585 "state": "completed", 00:15:43.585 "digest": "sha512", 00:15:43.585 "dhgroup": "ffdhe2048" 00:15:43.585 } 00:15:43.585 } 00:15:43.585 ]' 00:15:43.585 21:14:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.585 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.585 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.585 21:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.585 21:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.585 21:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.585 21:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.585 21:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.845 21:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.784 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.352 00:15:45.352 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.352 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.352 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.611 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.611 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.611 21:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.611 21:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.611 21:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.611 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.611 { 00:15:45.611 "cntlid": 107, 00:15:45.611 "qid": 0, 00:15:45.611 "state": "enabled", 00:15:45.611 "thread": "nvmf_tgt_poll_group_000", 00:15:45.611 "listen_address": { 00:15:45.611 "trtype": "TCP", 00:15:45.611 "adrfam": "IPv4", 00:15:45.611 "traddr": "10.0.0.2", 00:15:45.611 "trsvcid": "4420" 00:15:45.611 }, 00:15:45.611 "peer_address": { 00:15:45.611 "trtype": "TCP", 00:15:45.611 "adrfam": "IPv4", 00:15:45.611 "traddr": "10.0.0.1", 00:15:45.611 "trsvcid": "40148" 00:15:45.611 }, 00:15:45.611 "auth": { 00:15:45.611 "state": "completed", 00:15:45.611 "digest": "sha512", 00:15:45.611 "dhgroup": "ffdhe2048" 00:15:45.611 } 00:15:45.611 } 00:15:45.611 ]' 00:15:45.611 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.611 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.611 21:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.611 21:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.611 21:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.611 21:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.611 21:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.611 21:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.869 21:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 
--hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:15:46.437 21:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.437 21:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:46.437 21:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.437 21:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.696 21:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.696 21:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.696 21:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:46.696 21:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:46.696 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:15:46.696 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.696 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:46.696 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:46.696 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:46.696 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.696 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.696 21:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.696 21:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.955 21:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.955 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.955 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.214 00:15:47.214 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.214 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.214 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:47.473 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.473 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.473 21:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.473 21:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.473 21:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.473 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.473 { 00:15:47.473 "cntlid": 109, 00:15:47.473 "qid": 0, 00:15:47.473 "state": "enabled", 00:15:47.473 "thread": "nvmf_tgt_poll_group_000", 00:15:47.473 "listen_address": { 00:15:47.473 "trtype": "TCP", 00:15:47.473 "adrfam": "IPv4", 00:15:47.473 "traddr": "10.0.0.2", 00:15:47.473 "trsvcid": "4420" 00:15:47.473 }, 00:15:47.473 "peer_address": { 00:15:47.473 "trtype": "TCP", 00:15:47.473 "adrfam": "IPv4", 00:15:47.473 "traddr": "10.0.0.1", 00:15:47.473 "trsvcid": "40176" 00:15:47.473 }, 00:15:47.473 "auth": { 00:15:47.473 "state": "completed", 00:15:47.473 "digest": "sha512", 00:15:47.473 "dhgroup": "ffdhe2048" 00:15:47.473 } 00:15:47.473 } 00:15:47.473 ]' 00:15:47.473 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.473 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.473 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.473 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.473 21:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.731 21:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.731 21:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.731 21:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.731 21:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:15:48.667 21:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.667 21:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:48.667 21:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.667 21:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.667 21:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.667 21:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.667 21:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:48.667 21:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:48.927 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:15:48.927 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.927 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:48.927 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:48.927 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:48.927 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.927 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:15:48.927 21:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.927 21:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.927 21:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.927 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:48.927 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.186 00:15:49.186 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.186 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.186 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.445 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.445 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.445 21:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.445 21:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.445 21:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.445 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.445 { 00:15:49.445 "cntlid": 111, 00:15:49.445 "qid": 0, 00:15:49.445 "state": "enabled", 00:15:49.445 "thread": "nvmf_tgt_poll_group_000", 00:15:49.445 "listen_address": { 00:15:49.445 "trtype": "TCP", 00:15:49.445 "adrfam": "IPv4", 00:15:49.445 "traddr": "10.0.0.2", 00:15:49.445 "trsvcid": "4420" 00:15:49.445 }, 00:15:49.445 "peer_address": { 00:15:49.445 "trtype": "TCP", 00:15:49.445 "adrfam": "IPv4", 00:15:49.445 "traddr": "10.0.0.1", 00:15:49.445 "trsvcid": "40208" 00:15:49.445 }, 00:15:49.445 "auth": { 00:15:49.445 "state": 
"completed", 00:15:49.445 "digest": "sha512", 00:15:49.445 "dhgroup": "ffdhe2048" 00:15:49.445 } 00:15:49.445 } 00:15:49.445 ]' 00:15:49.445 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.445 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.445 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.445 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:49.445 21:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.704 21:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.704 21:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.704 21:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.962 21:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:15:50.529 21:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.529 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:50.529 21:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.529 21:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.529 21:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.529 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.529 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.529 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:50.529 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:50.788 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:15:50.788 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.788 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:50.788 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:50.788 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:50.788 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.788 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:15:50.788 21:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.788 21:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.788 21:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.788 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.788 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.355 00:15:51.355 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.355 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.355 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.613 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.613 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.613 21:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.613 21:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.613 21:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.613 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.613 { 00:15:51.613 "cntlid": 113, 00:15:51.613 "qid": 0, 00:15:51.613 "state": "enabled", 00:15:51.613 "thread": "nvmf_tgt_poll_group_000", 00:15:51.613 "listen_address": { 00:15:51.613 "trtype": "TCP", 00:15:51.613 "adrfam": "IPv4", 00:15:51.613 "traddr": "10.0.0.2", 00:15:51.613 "trsvcid": "4420" 00:15:51.613 }, 00:15:51.613 "peer_address": { 00:15:51.613 "trtype": "TCP", 00:15:51.613 "adrfam": "IPv4", 00:15:51.613 "traddr": "10.0.0.1", 00:15:51.613 "trsvcid": "41702" 00:15:51.613 }, 00:15:51.613 "auth": { 00:15:51.613 "state": "completed", 00:15:51.613 "digest": "sha512", 00:15:51.613 "dhgroup": "ffdhe3072" 00:15:51.613 } 00:15:51.613 } 00:15:51.613 ]' 00:15:51.613 21:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.613 21:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.613 21:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.613 21:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:51.613 21:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.613 21:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.613 21:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.613 21:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.178 21:15:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:15:52.743 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.743 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:52.743 21:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.743 21:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.743 21:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.743 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.743 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:52.743 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:53.002 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:15:53.002 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.002 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:53.002 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:53.002 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:53.002 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.002 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.002 21:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.002 21:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.002 21:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.002 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.002 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.260 00:15:53.260 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
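
Right after each attach, the script checks what was actually negotiated rather than trusting the attach to fail on a mismatch. A short sketch of that check, assuming the controller name nvme0 and the sha512 / ffdhe3072 iteration running at this point in the trace; the trace applies jq to a shell variable, a temporary file is used here only for readability.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # The host-side controller must exist under the expected name.
    $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0

    # On the target, dump the accepted qpairs and read back the auth parameters they settled on.
    $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > /tmp/qpairs.json
    jq -r '.[0].auth.digest'  /tmp/qpairs.json    # expect: sha512
    jq -r '.[0].auth.dhgroup' /tmp/qpairs.json    # expect: ffdhe3072
    jq -r '.[0].auth.state'   /tmp/qpairs.json    # expect: completed

    # Detach again so the next leg (kernel initiator via nvme-cli) starts clean.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
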
00:15:53.260 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.260 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.518 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.518 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.518 21:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.518 21:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.518 21:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.518 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.518 { 00:15:53.518 "cntlid": 115, 00:15:53.518 "qid": 0, 00:15:53.518 "state": "enabled", 00:15:53.518 "thread": "nvmf_tgt_poll_group_000", 00:15:53.518 "listen_address": { 00:15:53.518 "trtype": "TCP", 00:15:53.518 "adrfam": "IPv4", 00:15:53.518 "traddr": "10.0.0.2", 00:15:53.518 "trsvcid": "4420" 00:15:53.518 }, 00:15:53.518 "peer_address": { 00:15:53.518 "trtype": "TCP", 00:15:53.518 "adrfam": "IPv4", 00:15:53.518 "traddr": "10.0.0.1", 00:15:53.518 "trsvcid": "41732" 00:15:53.518 }, 00:15:53.518 "auth": { 00:15:53.518 "state": "completed", 00:15:53.518 "digest": "sha512", 00:15:53.518 "dhgroup": "ffdhe3072" 00:15:53.518 } 00:15:53.518 } 00:15:53.518 ]' 00:15:53.518 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.518 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.518 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.518 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:53.518 21:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.518 21:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.518 21:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.518 21:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.088 21:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:15:54.657 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.657 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:54.657 21:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.657 21:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.657 21:15:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.657 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.657 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:54.657 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:54.915 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:15:54.915 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.915 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:54.915 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:54.915 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:54.915 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.915 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.915 21:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.915 21:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.916 21:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.916 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.916 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.174 00:15:55.174 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.174 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.174 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.433 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.433 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.433 21:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.433 21:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.433 21:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.433 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.433 { 00:15:55.433 "cntlid": 117, 00:15:55.433 "qid": 0, 00:15:55.433 "state": "enabled", 00:15:55.433 "thread": "nvmf_tgt_poll_group_000", 00:15:55.433 "listen_address": { 00:15:55.433 "trtype": "TCP", 00:15:55.433 "adrfam": "IPv4", 
00:15:55.433 "traddr": "10.0.0.2", 00:15:55.433 "trsvcid": "4420" 00:15:55.433 }, 00:15:55.433 "peer_address": { 00:15:55.433 "trtype": "TCP", 00:15:55.433 "adrfam": "IPv4", 00:15:55.433 "traddr": "10.0.0.1", 00:15:55.433 "trsvcid": "41770" 00:15:55.433 }, 00:15:55.433 "auth": { 00:15:55.433 "state": "completed", 00:15:55.433 "digest": "sha512", 00:15:55.433 "dhgroup": "ffdhe3072" 00:15:55.433 } 00:15:55.433 } 00:15:55.433 ]' 00:15:55.433 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.433 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.433 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.692 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:55.692 21:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.692 21:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.692 21:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.692 21:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.950 21:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:15:56.516 21:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.517 21:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:56.517 21:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.517 21:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.517 21:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.517 21:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.517 21:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:56.517 21:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:56.776 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:15:56.776 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.776 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:56.776 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:56.776 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:56.776 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:15:56.776 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:15:56.776 21:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.776 21:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.776 21:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.776 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:56.776 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.035 00:15:57.035 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.035 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.035 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.293 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.293 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.293 21:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.294 21:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.294 21:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.294 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.294 { 00:15:57.294 "cntlid": 119, 00:15:57.294 "qid": 0, 00:15:57.294 "state": "enabled", 00:15:57.294 "thread": "nvmf_tgt_poll_group_000", 00:15:57.294 "listen_address": { 00:15:57.294 "trtype": "TCP", 00:15:57.294 "adrfam": "IPv4", 00:15:57.294 "traddr": "10.0.0.2", 00:15:57.294 "trsvcid": "4420" 00:15:57.294 }, 00:15:57.294 "peer_address": { 00:15:57.294 "trtype": "TCP", 00:15:57.294 "adrfam": "IPv4", 00:15:57.294 "traddr": "10.0.0.1", 00:15:57.294 "trsvcid": "41788" 00:15:57.294 }, 00:15:57.294 "auth": { 00:15:57.294 "state": "completed", 00:15:57.294 "digest": "sha512", 00:15:57.294 "dhgroup": "ffdhe3072" 00:15:57.294 } 00:15:57.294 } 00:15:57.294 ]' 00:15:57.294 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.294 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.294 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.294 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:57.294 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.553 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.553 21:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.553 21:15:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.811 21:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:15:58.379 21:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.379 21:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:15:58.379 21:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.379 21:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.379 21:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.379 21:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.379 21:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.379 21:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:58.379 21:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:58.637 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:15:58.637 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.637 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:58.637 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:58.637 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:58.637 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.637 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.637 21:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.637 21:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.637 21:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.638 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.638 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.204 00:15:59.204 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.204 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.204 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.204 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.204 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.204 21:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.204 21:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.204 21:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.204 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.204 { 00:15:59.204 "cntlid": 121, 00:15:59.204 "qid": 0, 00:15:59.204 "state": "enabled", 00:15:59.204 "thread": "nvmf_tgt_poll_group_000", 00:15:59.204 "listen_address": { 00:15:59.204 "trtype": "TCP", 00:15:59.204 "adrfam": "IPv4", 00:15:59.204 "traddr": "10.0.0.2", 00:15:59.204 "trsvcid": "4420" 00:15:59.204 }, 00:15:59.204 "peer_address": { 00:15:59.204 "trtype": "TCP", 00:15:59.204 "adrfam": "IPv4", 00:15:59.204 "traddr": "10.0.0.1", 00:15:59.204 "trsvcid": "41822" 00:15:59.204 }, 00:15:59.204 "auth": { 00:15:59.204 "state": "completed", 00:15:59.204 "digest": "sha512", 00:15:59.204 "dhgroup": "ffdhe4096" 00:15:59.204 } 00:15:59.204 } 00:15:59.204 ]' 00:15:59.204 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.463 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.463 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.463 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:59.463 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.463 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.463 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.463 21:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.722 21:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:16:00.287 21:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.287 21:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:00.287 21:15:11 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.287 21:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.287 21:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.287 21:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.287 21:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:00.287 21:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:00.545 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:00.545 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.545 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:00.545 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:00.545 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:00.545 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.545 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.545 21:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.545 21:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.545 21:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.545 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.545 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.112 00:16:01.112 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.112 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.112 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.112 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.112 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.112 21:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.112 21:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.371 21:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.371 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.371 { 
00:16:01.371 "cntlid": 123, 00:16:01.371 "qid": 0, 00:16:01.371 "state": "enabled", 00:16:01.371 "thread": "nvmf_tgt_poll_group_000", 00:16:01.371 "listen_address": { 00:16:01.371 "trtype": "TCP", 00:16:01.371 "adrfam": "IPv4", 00:16:01.371 "traddr": "10.0.0.2", 00:16:01.371 "trsvcid": "4420" 00:16:01.371 }, 00:16:01.371 "peer_address": { 00:16:01.371 "trtype": "TCP", 00:16:01.371 "adrfam": "IPv4", 00:16:01.371 "traddr": "10.0.0.1", 00:16:01.371 "trsvcid": "36522" 00:16:01.371 }, 00:16:01.371 "auth": { 00:16:01.371 "state": "completed", 00:16:01.371 "digest": "sha512", 00:16:01.371 "dhgroup": "ffdhe4096" 00:16:01.371 } 00:16:01.371 } 00:16:01.371 ]' 00:16:01.371 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.371 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.371 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.371 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:01.371 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.372 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.372 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.372 21:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.631 21:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:16:02.196 21:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.196 21:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:02.196 21:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.196 21:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.196 21:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.196 21:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.196 21:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:02.196 21:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:02.766 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:02.766 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.766 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:02.766 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe4096 00:16:02.766 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:02.766 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.766 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.766 21:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.766 21:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.766 21:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.766 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.766 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.025 00:16:03.025 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.025 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.025 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.284 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.284 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.284 21:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.284 21:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.284 21:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.284 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.284 { 00:16:03.284 "cntlid": 125, 00:16:03.284 "qid": 0, 00:16:03.284 "state": "enabled", 00:16:03.284 "thread": "nvmf_tgt_poll_group_000", 00:16:03.284 "listen_address": { 00:16:03.284 "trtype": "TCP", 00:16:03.284 "adrfam": "IPv4", 00:16:03.284 "traddr": "10.0.0.2", 00:16:03.284 "trsvcid": "4420" 00:16:03.284 }, 00:16:03.284 "peer_address": { 00:16:03.284 "trtype": "TCP", 00:16:03.284 "adrfam": "IPv4", 00:16:03.284 "traddr": "10.0.0.1", 00:16:03.284 "trsvcid": "36542" 00:16:03.284 }, 00:16:03.284 "auth": { 00:16:03.284 "state": "completed", 00:16:03.284 "digest": "sha512", 00:16:03.284 "dhgroup": "ffdhe4096" 00:16:03.284 } 00:16:03.284 } 00:16:03.284 ]' 00:16:03.284 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.284 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.284 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.284 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:03.284 21:15:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.284 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.284 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.284 21:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.543 21:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:16:04.479 21:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.479 21:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:04.479 21:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.479 21:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.479 21:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.479 21:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.480 21:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:04.480 21:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:04.738 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:04.738 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.738 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:04.738 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:04.738 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:04.738 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.738 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:16:04.738 21:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.738 21:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.739 21:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.739 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.739 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.998 00:16:04.998 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.998 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.998 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.256 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.256 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.256 21:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.256 21:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.256 21:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.256 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.256 { 00:16:05.256 "cntlid": 127, 00:16:05.256 "qid": 0, 00:16:05.256 "state": "enabled", 00:16:05.256 "thread": "nvmf_tgt_poll_group_000", 00:16:05.256 "listen_address": { 00:16:05.256 "trtype": "TCP", 00:16:05.256 "adrfam": "IPv4", 00:16:05.256 "traddr": "10.0.0.2", 00:16:05.256 "trsvcid": "4420" 00:16:05.256 }, 00:16:05.256 "peer_address": { 00:16:05.256 "trtype": "TCP", 00:16:05.256 "adrfam": "IPv4", 00:16:05.256 "traddr": "10.0.0.1", 00:16:05.256 "trsvcid": "36568" 00:16:05.256 }, 00:16:05.256 "auth": { 00:16:05.256 "state": "completed", 00:16:05.256 "digest": "sha512", 00:16:05.256 "dhgroup": "ffdhe4096" 00:16:05.256 } 00:16:05.256 } 00:16:05.256 ]' 00:16:05.256 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.515 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.515 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.515 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:05.515 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.515 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.515 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.515 21:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.774 21:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:16:06.341 21:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.600 21:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:06.600 21:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.600 21:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.600 21:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.600 21:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.600 21:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.600 21:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:06.600 21:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:06.600 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:06.600 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.600 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.600 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:06.600 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:06.600 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.600 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.600 21:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.600 21:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.600 21:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.600 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.600 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.167 00:16:07.168 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.168 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.168 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.436 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.436 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.436 21:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.436 21:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
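The trace above and below repeats the same connect_authenticate pass once per digest/dhgroup/key combination. Condensed into one iteration, the RPC sequence looks like the sketch below; it is a minimal rendering of what the trace shows, assuming the target and the host RPC listener on /var/tmp/host.sock set up earlier in this run are still alive, that rpc_cmd in the trace resolves to scripts/rpc.py against the target's default RPC socket, and with the DHHC-1 secrets replaced by <...> placeholders for the keys generated at the start of the test.

  # host side: restrict DH-HMAC-CHAP negotiation to a single digest/dhgroup pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # target side: allow the host NQN to authenticate with key0 (ckey0 enables bidirectional auth)
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach the controller, which forces the qpair to authenticate
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # verify the negotiated auth parameters on the target side
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .state, .digest, .dhgroup'   # expect: completed / sha512 / ffdhe6144
  # detach, then repeat the handshake from the kernel initiator with the same key material
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 \
      --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 \
      --dhchap-secret "<DHHC-1 key0 secret>" --dhchap-ctrl-secret "<DHHC-1 ckey0 secret>"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # tear down the host entry before the next digest/dhgroup/key combination
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291

The remainder of the trace is the same loop body instantiated for keys 1 through 3 and for the ffdhe8192 group, with the jq checks confirming that each qpair reports state "completed" and the expected digest/dhgroup.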
00:16:07.436 21:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.436 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.436 { 00:16:07.436 "cntlid": 129, 00:16:07.436 "qid": 0, 00:16:07.436 "state": "enabled", 00:16:07.436 "thread": "nvmf_tgt_poll_group_000", 00:16:07.436 "listen_address": { 00:16:07.436 "trtype": "TCP", 00:16:07.436 "adrfam": "IPv4", 00:16:07.436 "traddr": "10.0.0.2", 00:16:07.436 "trsvcid": "4420" 00:16:07.436 }, 00:16:07.436 "peer_address": { 00:16:07.436 "trtype": "TCP", 00:16:07.436 "adrfam": "IPv4", 00:16:07.436 "traddr": "10.0.0.1", 00:16:07.436 "trsvcid": "36590" 00:16:07.436 }, 00:16:07.436 "auth": { 00:16:07.436 "state": "completed", 00:16:07.436 "digest": "sha512", 00:16:07.436 "dhgroup": "ffdhe6144" 00:16:07.436 } 00:16:07.436 } 00:16:07.436 ]' 00:16:07.436 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.436 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.436 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.695 21:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:07.695 21:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.695 21:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.695 21:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.695 21:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.954 21:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:16:08.522 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.522 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:08.522 21:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.522 21:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.522 21:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.522 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.522 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:08.522 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:08.781 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:08.781 21:15:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.781 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.781 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:08.781 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:08.781 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.781 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.781 21:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.781 21:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.781 21:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.781 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.781 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.348 00:16:09.348 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.348 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.348 21:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.607 21:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.607 21:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.607 21:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.607 21:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.607 21:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.607 21:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.607 { 00:16:09.607 "cntlid": 131, 00:16:09.607 "qid": 0, 00:16:09.607 "state": "enabled", 00:16:09.607 "thread": "nvmf_tgt_poll_group_000", 00:16:09.607 "listen_address": { 00:16:09.607 "trtype": "TCP", 00:16:09.607 "adrfam": "IPv4", 00:16:09.607 "traddr": "10.0.0.2", 00:16:09.607 "trsvcid": "4420" 00:16:09.607 }, 00:16:09.607 "peer_address": { 00:16:09.607 "trtype": "TCP", 00:16:09.607 "adrfam": "IPv4", 00:16:09.607 "traddr": "10.0.0.1", 00:16:09.607 "trsvcid": "36606" 00:16:09.607 }, 00:16:09.607 "auth": { 00:16:09.607 "state": "completed", 00:16:09.607 "digest": "sha512", 00:16:09.607 "dhgroup": "ffdhe6144" 00:16:09.607 } 00:16:09.607 } 00:16:09.607 ]' 00:16:09.607 21:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.607 21:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.866 21:15:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.866 21:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:09.866 21:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.866 21:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.866 21:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.866 21:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.137 21:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:16:10.703 21:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.962 21:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:10.962 21:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.962 21:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.962 21:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.962 21:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.962 21:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:10.962 21:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:11.221 21:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:11.221 21:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.221 21:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:11.221 21:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:11.221 21:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:11.221 21:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.221 21:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.221 21:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.221 21:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.221 21:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.221 21:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.221 21:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.800 00:16:11.800 21:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.800 21:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.800 21:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.800 21:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.800 21:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.800 21:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.800 21:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.800 21:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.800 21:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.800 { 00:16:11.800 "cntlid": 133, 00:16:11.800 "qid": 0, 00:16:11.800 "state": "enabled", 00:16:11.800 "thread": "nvmf_tgt_poll_group_000", 00:16:11.800 "listen_address": { 00:16:11.800 "trtype": "TCP", 00:16:11.800 "adrfam": "IPv4", 00:16:11.800 "traddr": "10.0.0.2", 00:16:11.800 "trsvcid": "4420" 00:16:11.800 }, 00:16:11.800 "peer_address": { 00:16:11.800 "trtype": "TCP", 00:16:11.800 "adrfam": "IPv4", 00:16:11.800 "traddr": "10.0.0.1", 00:16:11.800 "trsvcid": "37690" 00:16:11.800 }, 00:16:11.800 "auth": { 00:16:11.800 "state": "completed", 00:16:11.800 "digest": "sha512", 00:16:11.800 "dhgroup": "ffdhe6144" 00:16:11.800 } 00:16:11.800 } 00:16:11.800 ]' 00:16:11.800 21:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.059 21:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.059 21:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.059 21:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:12.059 21:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.059 21:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.059 21:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.059 21:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.316 21:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret 
DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:16:13.252 21:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.252 21:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:13.252 21:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.252 21:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.252 21:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.252 21:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.252 21:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:13.252 21:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:13.510 21:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:13.510 21:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.510 21:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:13.510 21:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:13.510 21:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:13.510 21:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.511 21:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:16:13.511 21:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.511 21:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.511 21:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.511 21:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:13.511 21:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:13.768 00:16:14.026 21:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.026 21:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.026 21:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.283 21:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.283 21:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:14.283 21:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.283 21:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.283 21:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.283 21:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.283 { 00:16:14.283 "cntlid": 135, 00:16:14.283 "qid": 0, 00:16:14.283 "state": "enabled", 00:16:14.283 "thread": "nvmf_tgt_poll_group_000", 00:16:14.283 "listen_address": { 00:16:14.283 "trtype": "TCP", 00:16:14.283 "adrfam": "IPv4", 00:16:14.283 "traddr": "10.0.0.2", 00:16:14.283 "trsvcid": "4420" 00:16:14.283 }, 00:16:14.283 "peer_address": { 00:16:14.283 "trtype": "TCP", 00:16:14.283 "adrfam": "IPv4", 00:16:14.283 "traddr": "10.0.0.1", 00:16:14.283 "trsvcid": "37718" 00:16:14.283 }, 00:16:14.283 "auth": { 00:16:14.283 "state": "completed", 00:16:14.283 "digest": "sha512", 00:16:14.283 "dhgroup": "ffdhe6144" 00:16:14.283 } 00:16:14.283 } 00:16:14.283 ]' 00:16:14.283 21:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.283 21:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.284 21:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.284 21:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:14.284 21:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.284 21:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.284 21:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.284 21:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.541 21:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:16:15.474 21:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.474 21:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:15.474 21:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.474 21:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.474 21:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.474 21:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.474 21:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.474 21:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:15.474 21:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:15.732 21:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:15.732 21:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.732 21:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:15.732 21:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:15.732 21:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:15.732 21:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.732 21:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.732 21:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.732 21:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.732 21:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.732 21:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.732 21:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.297 00:16:16.297 21:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.297 21:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.297 21:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.554 21:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.555 21:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.555 21:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.555 21:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.555 21:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.555 21:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.555 { 00:16:16.555 "cntlid": 137, 00:16:16.555 "qid": 0, 00:16:16.555 "state": "enabled", 00:16:16.555 "thread": "nvmf_tgt_poll_group_000", 00:16:16.555 "listen_address": { 00:16:16.555 "trtype": "TCP", 00:16:16.555 "adrfam": "IPv4", 00:16:16.555 "traddr": "10.0.0.2", 00:16:16.555 "trsvcid": "4420" 00:16:16.555 }, 00:16:16.555 "peer_address": { 00:16:16.555 "trtype": "TCP", 00:16:16.555 "adrfam": "IPv4", 00:16:16.555 "traddr": "10.0.0.1", 00:16:16.555 "trsvcid": "37732" 00:16:16.555 }, 00:16:16.555 "auth": { 00:16:16.555 "state": "completed", 00:16:16.555 "digest": "sha512", 00:16:16.555 "dhgroup": "ffdhe8192" 00:16:16.555 } 00:16:16.555 } 
00:16:16.555 ]' 00:16:16.555 21:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.812 21:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.812 21:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.812 21:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:16.812 21:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.812 21:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.812 21:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.812 21:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.069 21:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.002 21:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.002 21:15:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.260 21:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.260 21:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.260 21:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.826 00:16:18.826 21:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.826 21:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.826 21:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.084 21:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.084 21:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.084 21:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.084 21:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.084 21:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.084 21:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.084 { 00:16:19.084 "cntlid": 139, 00:16:19.084 "qid": 0, 00:16:19.084 "state": "enabled", 00:16:19.084 "thread": "nvmf_tgt_poll_group_000", 00:16:19.084 "listen_address": { 00:16:19.084 "trtype": "TCP", 00:16:19.084 "adrfam": "IPv4", 00:16:19.084 "traddr": "10.0.0.2", 00:16:19.084 "trsvcid": "4420" 00:16:19.084 }, 00:16:19.084 "peer_address": { 00:16:19.084 "trtype": "TCP", 00:16:19.084 "adrfam": "IPv4", 00:16:19.084 "traddr": "10.0.0.1", 00:16:19.084 "trsvcid": "37758" 00:16:19.084 }, 00:16:19.084 "auth": { 00:16:19.084 "state": "completed", 00:16:19.084 "digest": "sha512", 00:16:19.084 "dhgroup": "ffdhe8192" 00:16:19.084 } 00:16:19.084 } 00:16:19.084 ]' 00:16:19.084 21:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.084 21:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.084 21:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.342 21:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:19.342 21:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.342 21:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.342 21:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.342 21:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.600 21:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:01:YmY0MmZiMDZjY2EzMmI1ZWYyOTI2NWRjNTE4NTFhODCJCwCw: --dhchap-ctrl-secret DHHC-1:02:MTk1MjM0MDNhZjQxZDliODFmOTUzNjcyYWVlYTdhZDFiYmNlZjcyN2NlZmI2MWI0s2rFzA==: 00:16:20.535 21:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.536 21:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:20.536 21:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.536 21:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.536 21:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.536 21:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.536 21:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:20.536 21:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:20.536 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:20.536 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.536 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:20.536 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:20.536 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:20.536 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.536 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.536 21:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.536 21:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.536 21:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.536 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.536 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.102 00:16:21.102 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.102 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.102 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.670 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.670 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.670 21:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.670 21:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.670 21:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.670 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.670 { 00:16:21.670 "cntlid": 141, 00:16:21.670 "qid": 0, 00:16:21.670 "state": "enabled", 00:16:21.670 "thread": "nvmf_tgt_poll_group_000", 00:16:21.670 "listen_address": { 00:16:21.670 "trtype": "TCP", 00:16:21.670 "adrfam": "IPv4", 00:16:21.670 "traddr": "10.0.0.2", 00:16:21.670 "trsvcid": "4420" 00:16:21.670 }, 00:16:21.670 "peer_address": { 00:16:21.670 "trtype": "TCP", 00:16:21.670 "adrfam": "IPv4", 00:16:21.670 "traddr": "10.0.0.1", 00:16:21.670 "trsvcid": "44502" 00:16:21.670 }, 00:16:21.670 "auth": { 00:16:21.670 "state": "completed", 00:16:21.670 "digest": "sha512", 00:16:21.670 "dhgroup": "ffdhe8192" 00:16:21.670 } 00:16:21.670 } 00:16:21.670 ]' 00:16:21.670 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.670 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.670 21:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.670 21:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:21.670 21:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.670 21:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.670 21:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.670 21:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.928 21:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:02:NWMzOTAzNmYyODkxYjJhNmVjODIzNjUzZWQ0ZWUzZTM5MmU2MDY3MjEzNTBiMTJi9jB05A==: --dhchap-ctrl-secret DHHC-1:01:NzA5ODAxZGM3YTU3ZWIyY2Y0YTI2YjFjZjliZjEyMWKE5Nh9: 00:16:22.494 21:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.494 21:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:22.494 21:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.494 21:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.494 21:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.494 21:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.494 21:15:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:22.494 21:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:22.752 21:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:22.752 21:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.009 21:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:23.009 21:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:23.009 21:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:23.009 21:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.009 21:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:16:23.009 21:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.009 21:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.009 21:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.009 21:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:23.009 21:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:23.575 00:16:23.575 21:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.575 21:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.575 21:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.833 21:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.833 21:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.833 21:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.833 21:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.833 21:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.833 21:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.833 { 00:16:23.833 "cntlid": 143, 00:16:23.833 "qid": 0, 00:16:23.833 "state": "enabled", 00:16:23.833 "thread": "nvmf_tgt_poll_group_000", 00:16:23.833 "listen_address": { 00:16:23.833 "trtype": "TCP", 00:16:23.833 "adrfam": "IPv4", 00:16:23.833 "traddr": "10.0.0.2", 00:16:23.833 "trsvcid": "4420" 00:16:23.833 }, 00:16:23.833 "peer_address": { 00:16:23.833 "trtype": "TCP", 00:16:23.833 "adrfam": "IPv4", 00:16:23.833 "traddr": "10.0.0.1", 00:16:23.833 "trsvcid": "44526" 
00:16:23.833 }, 00:16:23.833 "auth": { 00:16:23.833 "state": "completed", 00:16:23.833 "digest": "sha512", 00:16:23.833 "dhgroup": "ffdhe8192" 00:16:23.833 } 00:16:23.833 } 00:16:23.833 ]' 00:16:23.833 21:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.833 21:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.834 21:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.092 21:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.092 21:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.092 21:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.092 21:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.092 21:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.349 21:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:16:25.283 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.283 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:25.283 21:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.283 21:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.284 21:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.234 00:16:26.234 21:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.234 21:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.234 21:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.234 21:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.234 21:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.234 21:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.234 21:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.234 21:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.234 21:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.234 { 00:16:26.234 "cntlid": 145, 00:16:26.234 "qid": 0, 00:16:26.234 "state": "enabled", 00:16:26.234 "thread": "nvmf_tgt_poll_group_000", 00:16:26.234 "listen_address": { 00:16:26.234 "trtype": "TCP", 00:16:26.234 "adrfam": "IPv4", 00:16:26.234 "traddr": "10.0.0.2", 00:16:26.234 "trsvcid": "4420" 00:16:26.234 }, 00:16:26.234 "peer_address": { 00:16:26.234 "trtype": "TCP", 00:16:26.234 "adrfam": "IPv4", 00:16:26.234 "traddr": "10.0.0.1", 00:16:26.234 "trsvcid": "44544" 00:16:26.234 }, 00:16:26.234 "auth": { 00:16:26.234 "state": "completed", 00:16:26.234 "digest": "sha512", 00:16:26.234 "dhgroup": "ffdhe8192" 00:16:26.234 } 00:16:26.234 } 00:16:26.234 ]' 00:16:26.234 21:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.507 21:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.507 21:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.507 21:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:26.507 21:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.507 21:15:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.507 21:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.507 21:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.768 21:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:00:MDc5YjMxOWRlMGZiOGI2OTcxNjNmOTk0NzYwYzVhMGI4NzIyMGJkNDE1NTgxYzdjxX7cRQ==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZjAyNmVjNTYyMDMzZGM1OTk5NWFjNGE2MGI4NDQyNTZjODIxNDEyNjkzMzY5Mjk5OGQxNjE1NmM2OTUxYTRxB3g=: 00:16:27.333 21:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.333 21:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:27.333 21:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.333 21:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.333 21:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.333 21:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 00:16:27.333 21:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.333 21:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.590 21:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.590 21:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:27.590 21:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:27.590 21:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:27.590 21:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:27.590 21:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:27.590 21:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:27.590 21:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:27.590 21:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:27.590 21:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:28.155 request: 00:16:28.155 { 00:16:28.155 "name": "nvme0", 00:16:28.155 "trtype": "tcp", 00:16:28.155 "traddr": "10.0.0.2", 00:16:28.155 "adrfam": "ipv4", 00:16:28.155 "trsvcid": "4420", 00:16:28.155 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:28.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291", 00:16:28.155 "prchk_reftag": false, 00:16:28.155 "prchk_guard": false, 00:16:28.155 "hdgst": false, 00:16:28.155 "ddgst": false, 00:16:28.155 "dhchap_key": "key2", 00:16:28.155 "method": "bdev_nvme_attach_controller", 00:16:28.155 "req_id": 1 00:16:28.155 } 00:16:28.155 Got JSON-RPC error response 00:16:28.155 response: 00:16:28.155 { 00:16:28.155 "code": -5, 00:16:28.155 "message": "Input/output error" 00:16:28.155 } 00:16:28.155 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:28.155 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:28.155 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:28.155 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:28.155 21:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:28.155 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.155 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.155 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.155 21:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.155 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.155 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.155 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.155 21:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:28.156 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:28.156 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:28.156 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:28.156 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.156 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:28.156 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:16:28.156 21:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:28.156 21:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:28.720 request: 00:16:28.720 { 00:16:28.720 "name": "nvme0", 00:16:28.720 "trtype": "tcp", 00:16:28.720 "traddr": "10.0.0.2", 00:16:28.720 "adrfam": "ipv4", 00:16:28.720 "trsvcid": "4420", 00:16:28.720 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:28.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291", 00:16:28.720 "prchk_reftag": false, 00:16:28.720 "prchk_guard": false, 00:16:28.720 "hdgst": false, 00:16:28.720 "ddgst": false, 00:16:28.720 "dhchap_key": "key1", 00:16:28.720 "dhchap_ctrlr_key": "ckey2", 00:16:28.720 "method": "bdev_nvme_attach_controller", 00:16:28.720 "req_id": 1 00:16:28.720 } 00:16:28.720 Got JSON-RPC error response 00:16:28.720 response: 00:16:28.720 { 00:16:28.720 "code": -5, 00:16:28.720 "message": "Input/output error" 00:16:28.720 } 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key1 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.720 21:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.285 request: 00:16:29.285 { 00:16:29.285 "name": "nvme0", 00:16:29.285 "trtype": "tcp", 00:16:29.285 "traddr": "10.0.0.2", 00:16:29.285 "adrfam": "ipv4", 00:16:29.285 "trsvcid": "4420", 00:16:29.285 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:29.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291", 00:16:29.285 "prchk_reftag": false, 00:16:29.285 "prchk_guard": false, 00:16:29.285 "hdgst": false, 00:16:29.285 "ddgst": false, 00:16:29.285 "dhchap_key": "key1", 00:16:29.285 "dhchap_ctrlr_key": "ckey1", 00:16:29.285 "method": "bdev_nvme_attach_controller", 00:16:29.285 "req_id": 1 00:16:29.285 } 00:16:29.285 Got JSON-RPC error response 00:16:29.285 response: 00:16:29.285 { 00:16:29.285 "code": -5, 00:16:29.285 "message": "Input/output error" 00:16:29.285 } 00:16:29.285 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 72110 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72110 ']' 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72110 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72110 00:16:29.286 killing process with pid 72110 00:16:29.286 21:15:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72110' 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72110 00:16:29.286 21:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72110 00:16:30.661 21:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:30.661 21:15:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.661 21:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:30.661 21:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.661 21:15:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=75090 00:16:30.661 21:15:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 75090 00:16:30.661 21:15:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:30.661 21:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 75090 ']' 00:16:30.661 21:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.661 21:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.661 21:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.661 21:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.661 21:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.597 21:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.597 21:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:31.597 21:15:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:31.597 21:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:31.597 21:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.597 21:15:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.597 21:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:31.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.597 21:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 75090 00:16:31.597 21:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 75090 ']' 00:16:31.597 21:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.597 21:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:31.597 21:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:31.597 21:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:31.597 21:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.597 21:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.597 21:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:31.597 21:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:31.597 21:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.597 21:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.162 21:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.162 21:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:32.162 21:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.162 21:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:32.162 21:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:32.162 21:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:32.162 21:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.162 21:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:16:32.162 21:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.162 21:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.162 21:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.162 21:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.162 21:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.727 00:16:32.727 21:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.727 21:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.727 21:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.985 21:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.985 21:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.985 21:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.985 21:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.985 21:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.985 21:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.985 { 00:16:32.985 "cntlid": 1, 00:16:32.985 "qid": 0, 
00:16:32.985 "state": "enabled", 00:16:32.985 "thread": "nvmf_tgt_poll_group_000", 00:16:32.985 "listen_address": { 00:16:32.985 "trtype": "TCP", 00:16:32.985 "adrfam": "IPv4", 00:16:32.985 "traddr": "10.0.0.2", 00:16:32.985 "trsvcid": "4420" 00:16:32.985 }, 00:16:32.985 "peer_address": { 00:16:32.985 "trtype": "TCP", 00:16:32.985 "adrfam": "IPv4", 00:16:32.985 "traddr": "10.0.0.1", 00:16:32.985 "trsvcid": "56902" 00:16:32.985 }, 00:16:32.985 "auth": { 00:16:32.985 "state": "completed", 00:16:32.985 "digest": "sha512", 00:16:32.985 "dhgroup": "ffdhe8192" 00:16:32.985 } 00:16:32.985 } 00:16:32.985 ]' 00:16:32.985 21:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.985 21:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.985 21:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.985 21:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:32.985 21:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.243 21:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.243 21:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.243 21:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.244 21:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-secret DHHC-1:03:YThlNjkzZDE2MTAyMDAzY2NlMDgyODExMzNiOTQ4NjI0YTU1OTY5ODcyMTI5OTFiNjgxZWVlOTBjNjExOTE5NRFseEs=: 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --dhchap-key key3 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.179 21:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.438 request: 00:16:34.438 { 00:16:34.438 "name": "nvme0", 00:16:34.438 "trtype": "tcp", 00:16:34.438 "traddr": "10.0.0.2", 00:16:34.438 "adrfam": "ipv4", 00:16:34.438 "trsvcid": "4420", 00:16:34.438 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:34.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291", 00:16:34.438 "prchk_reftag": false, 00:16:34.438 "prchk_guard": false, 00:16:34.438 "hdgst": false, 00:16:34.438 "ddgst": false, 00:16:34.438 "dhchap_key": "key3", 00:16:34.438 "method": "bdev_nvme_attach_controller", 00:16:34.438 "req_id": 1 00:16:34.438 } 00:16:34.438 Got JSON-RPC error response 00:16:34.438 response: 00:16:34.438 { 00:16:34.438 "code": -5, 00:16:34.438 "message": "Input/output error" 00:16:34.438 } 00:16:34.698 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:34.698 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:34.698 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:34.698 21:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:34.698 21:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:16:34.698 21:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:16:34.698 21:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:34.698 21:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:34.698 21:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.698 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:34.698 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.698 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:34.698 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:34.698 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:34.698 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:34.698 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.698 21:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.957 request: 00:16:34.957 { 00:16:34.957 "name": "nvme0", 00:16:34.957 "trtype": "tcp", 00:16:34.957 "traddr": "10.0.0.2", 00:16:34.957 "adrfam": "ipv4", 00:16:34.957 "trsvcid": "4420", 00:16:34.957 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:34.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291", 00:16:34.957 "prchk_reftag": false, 00:16:34.957 "prchk_guard": false, 00:16:34.957 "hdgst": false, 00:16:34.957 "ddgst": false, 00:16:34.957 "dhchap_key": "key3", 00:16:34.957 "method": "bdev_nvme_attach_controller", 00:16:34.957 "req_id": 1 00:16:34.957 } 00:16:34.957 Got JSON-RPC error response 00:16:34.957 response: 00:16:34.957 { 00:16:34.957 "code": -5, 00:16:34.957 "message": "Input/output error" 00:16:34.957 } 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 
--dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:35.216 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:35.475 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:35.475 21:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:35.475 21:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:35.733 request: 00:16:35.733 { 00:16:35.733 "name": "nvme0", 00:16:35.733 "trtype": "tcp", 00:16:35.733 "traddr": "10.0.0.2", 00:16:35.733 "adrfam": "ipv4", 00:16:35.733 "trsvcid": "4420", 00:16:35.733 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:35.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291", 00:16:35.733 "prchk_reftag": false, 00:16:35.733 "prchk_guard": false, 00:16:35.733 "hdgst": false, 00:16:35.733 "ddgst": false, 00:16:35.733 "dhchap_key": "key0", 00:16:35.733 "dhchap_ctrlr_key": "key1", 00:16:35.733 "method": "bdev_nvme_attach_controller", 00:16:35.733 "req_id": 1 00:16:35.733 } 00:16:35.733 Got 
JSON-RPC error response 00:16:35.733 response: 00:16:35.733 { 00:16:35.733 "code": -5, 00:16:35.733 "message": "Input/output error" 00:16:35.733 } 00:16:35.733 21:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:35.733 21:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:35.733 21:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:35.733 21:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:35.733 21:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:35.733 21:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:35.991 00:16:35.991 21:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:16:35.991 21:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:16:35.991 21:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.249 21:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.249 21:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.249 21:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.507 21:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:16:36.507 21:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:16:36.507 21:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 72141 00:16:36.507 21:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72141 ']' 00:16:36.507 21:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72141 00:16:36.507 21:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:36.507 21:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:36.507 21:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72141 00:16:36.507 killing process with pid 72141 00:16:36.507 21:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:36.507 21:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:36.507 21:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72141' 00:16:36.507 21:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72141 00:16:36.507 21:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72141 00:16:38.406 21:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:38.406 21:15:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:38.406 21:15:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 
00:16:38.406 21:15:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:38.406 21:15:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:16:38.406 21:15:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:38.406 21:15:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:38.406 rmmod nvme_tcp 00:16:38.406 rmmod nvme_fabrics 00:16:38.664 rmmod nvme_keyring 00:16:38.664 21:15:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:38.664 21:15:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:16:38.664 21:15:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:16:38.664 21:15:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 75090 ']' 00:16:38.664 21:15:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 75090 00:16:38.664 21:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 75090 ']' 00:16:38.664 21:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 75090 00:16:38.664 21:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:38.664 21:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:38.664 21:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75090 00:16:38.664 killing process with pid 75090 00:16:38.664 21:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:38.664 21:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:38.664 21:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75090' 00:16:38.664 21:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 75090 00:16:38.664 21:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 75090 00:16:39.599 21:15:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:39.599 21:15:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:39.599 21:15:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:39.599 21:15:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:39.599 21:15:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:39.599 21:15:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.599 21:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.599 21:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.599 21:15:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:39.599 21:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.zVi /tmp/spdk.key-sha256.FNH /tmp/spdk.key-sha384.PZo /tmp/spdk.key-sha512.gsJ /tmp/spdk.key-sha512.5oN /tmp/spdk.key-sha384.oKJ /tmp/spdk.key-sha256.DCE '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:16:39.599 00:16:39.599 real 2m48.818s 00:16:39.599 user 6m40.626s 00:16:39.599 sys 0m23.888s 00:16:39.599 21:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:39.599 
************************************ 00:16:39.599 END TEST nvmf_auth_target 00:16:39.599 21:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.599 ************************************ 00:16:39.599 21:15:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:39.599 21:15:51 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:16:39.599 21:15:51 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:39.599 21:15:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:39.599 21:15:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.599 21:15:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:39.599 ************************************ 00:16:39.599 START TEST nvmf_bdevio_no_huge 00:16:39.599 ************************************ 00:16:39.599 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:39.857 * Looking for test storage... 00:16:39.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:39.857 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:39.857 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh 
]] 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:39.858 21:15:51 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:39.858 Cannot find device "nvmf_tgt_br" 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:39.858 Cannot find device "nvmf_tgt_br2" 
00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:39.858 Cannot find device "nvmf_tgt_br" 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:39.858 Cannot find device "nvmf_tgt_br2" 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:39.858 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:39.858 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:39.858 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:40.116 21:15:51 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:40.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:16:40.116 00:16:40.116 --- 10.0.0.2 ping statistics --- 00:16:40.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.116 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:40.116 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:40.116 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:16:40.116 00:16:40.116 --- 10.0.0.3 ping statistics --- 00:16:40.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.116 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:40.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:40.116 00:16:40.116 --- 10.0.0.1 ping statistics --- 00:16:40.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.116 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:40.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:40.116 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=75438 00:16:40.117 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:40.117 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 75438 00:16:40.117 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 75438 ']' 00:16:40.117 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.117 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.117 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.117 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.117 21:15:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:40.374 [2024-07-14 21:15:51.678694] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:40.374 [2024-07-14 21:15:51.678879] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:40.374 [2024-07-14 21:15:51.883863] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.632 [2024-07-14 21:15:52.165174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.632 [2024-07-14 21:15:52.165230] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.632 [2024-07-14 21:15:52.165260] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.632 [2024-07-14 21:15:52.165279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.632 [2024-07-14 21:15:52.165296] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:40.632 [2024-07-14 21:15:52.165472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:40.632 [2024-07-14 21:15:52.166257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:40.632 [2024-07-14 21:15:52.166403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.632 [2024-07-14 21:15:52.166415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:40.890 [2024-07-14 21:15:52.308984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:41.149 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.149 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:16:41.149 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:41.149 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:41.149 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:41.149 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.149 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:41.149 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.149 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:41.149 [2024-07-14 21:15:52.646327] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.149 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.149 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:41.149 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.149 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:41.408 Malloc0 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:41.408 [2024-07-14 21:15:52.739277] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:41.408 { 00:16:41.408 "params": { 00:16:41.408 "name": "Nvme$subsystem", 00:16:41.408 "trtype": "$TEST_TRANSPORT", 00:16:41.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.408 "adrfam": "ipv4", 00:16:41.408 "trsvcid": "$NVMF_PORT", 00:16:41.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.408 "hdgst": ${hdgst:-false}, 00:16:41.408 "ddgst": ${ddgst:-false} 00:16:41.408 }, 00:16:41.408 "method": "bdev_nvme_attach_controller" 00:16:41.408 } 00:16:41.408 EOF 00:16:41.408 )") 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:16:41.408 21:15:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:41.408 "params": { 00:16:41.408 "name": "Nvme1", 00:16:41.408 "trtype": "tcp", 00:16:41.408 "traddr": "10.0.0.2", 00:16:41.408 "adrfam": "ipv4", 00:16:41.408 "trsvcid": "4420", 00:16:41.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:41.408 "hdgst": false, 00:16:41.408 "ddgst": false 00:16:41.408 }, 00:16:41.408 "method": "bdev_nvme_attach_controller" 00:16:41.408 }' 00:16:41.408 [2024-07-14 21:15:52.847240] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
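
The bdevio no-huge target bring-up traced above reduces to five RPC calls issued through rpc_cmd before the bdevio app is launched with a generated --json config. As a minimal standalone sketch (an assumption for illustration: a running nvmf_tgt reachable on the default /var/tmp/spdk.sock, using the repo's rpc.py exactly as the test does), the same sequence would be:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8192-byte in-capsule data
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB backing bdev, 512-byte blocks
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then connects as an initiator using the bdev_nvme_attach_controller parameters in the JSON generated just below (traddr 10.0.0.2, trsvcid 4420, hostnqn host1), with --no-huge -s 1024 keeping it off hugepages.
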
00:16:41.408 [2024-07-14 21:15:52.848135] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid75474 ] 00:16:41.666 [2024-07-14 21:15:53.049287] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:41.923 [2024-07-14 21:15:53.277894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.923 [2024-07-14 21:15:53.278012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.923 [2024-07-14 21:15:53.278034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.923 [2024-07-14 21:15:53.444532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:42.180 I/O targets: 00:16:42.180 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:42.180 00:16:42.180 00:16:42.180 CUnit - A unit testing framework for C - Version 2.1-3 00:16:42.180 http://cunit.sourceforge.net/ 00:16:42.180 00:16:42.180 00:16:42.180 Suite: bdevio tests on: Nvme1n1 00:16:42.180 Test: blockdev write read block ...passed 00:16:42.180 Test: blockdev write zeroes read block ...passed 00:16:42.180 Test: blockdev write zeroes read no split ...passed 00:16:42.180 Test: blockdev write zeroes read split ...passed 00:16:42.437 Test: blockdev write zeroes read split partial ...passed 00:16:42.437 Test: blockdev reset ...[2024-07-14 21:15:53.732223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:42.437 [2024-07-14 21:15:53.732404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:16:42.437 passed 00:16:42.437 Test: blockdev write read 8 blocks ...[2024-07-14 21:15:53.746766] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:42.437 passed 00:16:42.437 Test: blockdev write read size > 128k ...passed 00:16:42.437 Test: blockdev write read invalid size ...passed 00:16:42.437 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:42.437 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:42.437 Test: blockdev write read max offset ...passed 00:16:42.437 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:42.437 Test: blockdev writev readv 8 blocks ...passed 00:16:42.437 Test: blockdev writev readv 30 x 1block ...passed 00:16:42.437 Test: blockdev writev readv block ...passed 00:16:42.437 Test: blockdev writev readv size > 128k ...passed 00:16:42.437 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:42.437 Test: blockdev comparev and writev ...[2024-07-14 21:15:53.759979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.437 [2024-07-14 21:15:53.760296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:42.437 [2024-07-14 21:15:53.760437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.437 [2024-07-14 21:15:53.760556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:42.437 [2024-07-14 21:15:53.761139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.437 [2024-07-14 21:15:53.761248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:42.437 [2024-07-14 21:15:53.761384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.437 [2024-07-14 21:15:53.761609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:42.437 [2024-07-14 21:15:53.762102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.437 [2024-07-14 21:15:53.762343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:42.437 [2024-07-14 21:15:53.762619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.437 [2024-07-14 21:15:53.762912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:42.437 [2024-07-14 21:15:53.763388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.437 [2024-07-14 21:15:53.763611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:42.437 [2024-07-14 21:15:53.763917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:42.437 [2024-07-14 21:15:53.764195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 passed 00:16:42.437 Test: 
blockdev nvme passthru rw ...cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:42.437 passed 00:16:42.437 Test: blockdev nvme passthru vendor specific ...[2024-07-14 21:15:53.765440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.437 [2024-07-14 21:15:53.765568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:42.437 [2024-07-14 21:15:53.765944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.437 [2024-07-14 21:15:53.766046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:42.437 [2024-07-14 21:15:53.766265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.437 [2024-07-14 21:15:53.766460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:42.437 [2024-07-14 21:15:53.766837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:42.437 [2024-07-14 21:15:53.767061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:42.437 passed 00:16:42.437 Test: blockdev nvme admin passthru ...passed 00:16:42.437 Test: blockdev copy ...passed 00:16:42.437 00:16:42.437 Run Summary: Type Total Ran Passed Failed Inactive 00:16:42.437 suites 1 1 n/a 0 0 00:16:42.437 tests 23 23 23 0 0 00:16:42.437 asserts 152 152 152 0 n/a 00:16:42.437 00:16:42.437 Elapsed time = 0.243 seconds 00:16:43.038 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.038 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.038 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:43.038 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.038 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:43.038 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:43.038 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:43.038 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:16:43.038 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.038 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:16:43.038 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.038 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.038 rmmod nvme_tcp 00:16:43.038 rmmod nvme_fabrics 00:16:43.303 rmmod nvme_keyring 00:16:43.303 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.303 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:16:43.303 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:16:43.303 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 75438 ']' 00:16:43.303 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 75438 00:16:43.303 
21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 75438 ']' 00:16:43.303 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 75438 00:16:43.303 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:16:43.303 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:43.303 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75438 00:16:43.303 killing process with pid 75438 00:16:43.303 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:43.303 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:43.303 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75438' 00:16:43.303 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 75438 00:16:43.303 21:15:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 75438 00:16:44.240 21:15:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:44.240 21:15:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:44.240 21:15:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:44.240 21:15:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.240 21:15:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:44.240 21:15:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.240 21:15:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.240 21:15:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.240 21:15:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:44.240 00:16:44.240 real 0m4.361s 00:16:44.240 user 0m15.358s 00:16:44.240 sys 0m1.368s 00:16:44.240 21:15:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:44.240 ************************************ 00:16:44.240 END TEST nvmf_bdevio_no_huge 00:16:44.240 ************************************ 00:16:44.240 21:15:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:44.240 21:15:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:44.240 21:15:55 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:44.240 21:15:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:44.240 21:15:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.240 21:15:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:44.240 ************************************ 00:16:44.240 START TEST nvmf_tls 00:16:44.240 ************************************ 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:44.240 * Looking for test storage... 
00:16:44.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.240 21:15:55 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:44.241 Cannot find device "nvmf_tgt_br" 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:44.241 Cannot find device "nvmf_tgt_br2" 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:44.241 Cannot find device "nvmf_tgt_br" 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:44.241 Cannot find device "nvmf_tgt_br2" 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:44.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:44.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:44.241 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:44.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:16:44.501 00:16:44.501 --- 10.0.0.2 ping statistics --- 00:16:44.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.501 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:44.501 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:44.501 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:44.501 00:16:44.501 --- 10.0.0.3 ping statistics --- 00:16:44.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.501 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:44.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:44.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:44.501 00:16:44.501 --- 10.0.0.1 ping statistics --- 00:16:44.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.501 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=75668 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 75668 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 75668 ']' 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.501 21:15:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:44.759 [2024-07-14 21:15:56.087874] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:44.759 [2024-07-14 21:15:56.088055] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.759 [2024-07-14 21:15:56.268405] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.017 [2024-07-14 21:15:56.500261] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.017 [2024-07-14 21:15:56.500334] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
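
nvmf_veth_init, traced just above, rebuilds the virtual test network on every run: a network namespace for the target, veth pairs whose peer ends are enslaved to a bridge, 10.0.0.0/24 addressing, an iptables accept rule for the NVMe/TCP port, and ping checks. A condensed sketch of the same commands (the second target interface nvmf_tgt_if2 / 10.0.0.3 and the stale-device cleanup are omitted here for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target side moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                              # root namespace -> target interface
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator interface
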
00:16:45.017 [2024-07-14 21:15:56.500355] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.017 [2024-07-14 21:15:56.500383] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.017 [2024-07-14 21:15:56.500396] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.018 [2024-07-14 21:15:56.500464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.584 21:15:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.584 21:15:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:45.584 21:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:45.584 21:15:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:45.584 21:15:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.584 21:15:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.584 21:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:45.584 21:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:45.843 true 00:16:45.843 21:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:16:45.843 21:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:46.101 21:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:16:46.101 21:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:46.101 21:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:46.360 21:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:16:46.360 21:15:57 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:46.618 21:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:16:46.618 21:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:46.618 21:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:46.876 21:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:46.876 21:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:16:47.135 21:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:16:47.135 21:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:47.135 21:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:47.135 21:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:47.393 21:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:16:47.393 21:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:47.393 21:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:47.652 21:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:47.652 21:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 
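
Because the target was started with --wait-for-rpc, tls.sh can still reconfigure the socket layer over RPC before the subsystems initialize: it pins the default implementation to ssl, then round-trips tls_version (13, then 7) and the ktls flag through sock_impl_set_options / sock_impl_get_options, as traced above and just below. A minimal sketch of that check, with jq used the same way the script uses it:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc sock_impl_get_options -i ssl | jq -r .tls_version      # expect 13
$rpc sock_impl_set_options -i ssl --tls-version 7
$rpc sock_impl_get_options -i ssl | jq -r .tls_version      # expect 7
$rpc sock_impl_set_options -i ssl --enable-ktls
$rpc sock_impl_get_options -i ssl | jq -r .enable_ktls      # expect true
$rpc sock_impl_set_options -i ssl --disable-ktls
$rpc sock_impl_get_options -i ssl | jq -r .enable_ktls      # expect false
$rpc framework_start_init                                   # only after this do the nvmf subsystems come up
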
00:16:47.652 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:16:47.652 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:47.652 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:47.911 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:47.911 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.n53SKgSHfH 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.CVkDLvU6mb 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.n53SKgSHfH 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.CVkDLvU6mb 00:16:48.479 21:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:48.738 21:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:48.997 [2024-07-14 21:16:00.471688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:16:49.256 21:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.n53SKgSHfH 00:16:49.256 21:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.n53SKgSHfH 00:16:49.256 21:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:49.514 [2024-07-14 21:16:00.817731] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.514 21:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:49.514 21:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:49.772 [2024-07-14 21:16:01.237864] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:49.772 [2024-07-14 21:16:01.238150] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.772 21:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:50.029 malloc0 00:16:50.029 21:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:50.285 21:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n53SKgSHfH 00:16:50.543 [2024-07-14 21:16:01.969696] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:50.543 21:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.n53SKgSHfH 00:17:02.743 Initializing NVMe Controllers 00:17:02.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:02.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:02.743 Initialization complete. Launching workers. 
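
The TLS data-path stage needs the same key on both ends: format_interchange_psk wraps the configured hex key into the NVMeTLSkey-1:01:...: interchange string seen in the trace, which is stored with mode 0600 in a temp file, registered for host1 via nvmf_subsystem_add_host --psk, and handed to spdk_nvme_perf via --psk-path. A condensed sketch with the key value copied from the log and the transport flags as used by setup_nvmf_tgt (assumed: the target already running inside nvmf_tgt_ns_spdk):

key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k: TLS (secure channel) listener, as in tls.sh
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

# Initiator: perf over TLS from inside the target namespace, pointing at the same key file
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl \
    -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$key_path"

The IOPS/latency summary just below is the point of the stage: I/O completes over the TLS-wrapped connection.
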
00:17:02.743 ======================================================== 00:17:02.743 Latency(us) 00:17:02.743 Device Information : IOPS MiB/s Average min max 00:17:02.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7075.14 27.64 9048.67 1792.53 11368.45 00:17:02.743 ======================================================== 00:17:02.743 Total : 7075.14 27.64 9048.67 1792.53 11368.45 00:17:02.743 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n53SKgSHfH 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.n53SKgSHfH' 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75907 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75907 /var/tmp/bdevperf.sock 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 75907 ']' 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.743 21:16:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.743 [2024-07-14 21:16:12.401499] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
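
run_bdevperf exercises the same TLS path from the bdev layer: bdevperf starts idle (-z) on its own RPC socket, a single bdev_nvme_attach_controller call carries --psk, and bdevperf.py triggers the timed verify workload. A hedged sketch of that flow with the paths used above (the test's waitforlisten and trap plumbing omitted):

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

# Start bdevperf in wait mode so the bdev can be attached over its RPC socket first
$spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

# Attach the remote namespace over TCP with TLS; --psk points at the interchange-format key file
$spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n53SKgSHfH

# Run the configured verify workload; TLSTESTn1 in the results that follow is this controller's namespace
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests
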
00:17:02.743 [2024-07-14 21:16:12.401682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75907 ] 00:17:02.743 [2024-07-14 21:16:12.573987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.743 [2024-07-14 21:16:12.797549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.743 [2024-07-14 21:16:12.964230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:02.743 21:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.743 21:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:02.743 21:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n53SKgSHfH 00:17:02.743 [2024-07-14 21:16:13.475979] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:02.743 [2024-07-14 21:16:13.476187] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:02.743 TLSTESTn1 00:17:02.743 21:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:02.743 Running I/O for 10 seconds... 00:17:12.776 00:17:12.776 Latency(us) 00:17:12.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.776 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:12.776 Verification LBA range: start 0x0 length 0x2000 00:17:12.776 TLSTESTn1 : 10.02 3156.44 12.33 0.00 0.00 40471.16 8221.79 30146.56 00:17:12.776 =================================================================================================================== 00:17:12.776 Total : 3156.44 12.33 0.00 0.00 40471.16 8221.79 30146.56 00:17:12.776 0 00:17:12.776 21:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:12.776 21:16:23 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 75907 00:17:12.776 21:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 75907 ']' 00:17:12.776 21:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 75907 00:17:12.776 21:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:12.776 21:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.776 21:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75907 00:17:12.776 killing process with pid 75907 00:17:12.776 Received shutdown signal, test time was about 10.000000 seconds 00:17:12.776 00:17:12.776 Latency(us) 00:17:12.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.776 =================================================================================================================== 00:17:12.776 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:12.776 21:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:12.776 21:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:17:12.776 21:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75907' 00:17:12.776 21:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 75907 00:17:12.776 [2024-07-14 21:16:23.738328] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:12.776 21:16:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 75907 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CVkDLvU6mb 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CVkDLvU6mb 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CVkDLvU6mb 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.CVkDLvU6mb' 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76047 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76047 /var/tmp/bdevperf.sock 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76047 ']' 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.342 21:16:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.601 [2024-07-14 21:16:24.943055] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
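
The run launched here is a negative case: host1 attaches with the second key (/tmp/tmp.CVkDLvU6mb), which is not the PSK registered for host1 on cnode1, so the attach must fail and the NOT wrapper only asserts the non-zero exit. A minimal sketch of the same assertion using a plain exit-code check instead of the test helpers (paths as above):

if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
       -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CVkDLvU6mb; then
    echo 'unexpected: attach succeeded with a PSK the target never accepted' >&2
    exit 1
fi
# Expected result, as in the trace that follows: the connection is dropped and the RPC returns
# JSON-RPC error -5 (Input/output error) from bdev_nvme_attach_controller.
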
00:17:13.601 [2024-07-14 21:16:24.943264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76047 ] 00:17:13.601 [2024-07-14 21:16:25.115173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.859 [2024-07-14 21:16:25.289576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.119 [2024-07-14 21:16:25.465324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:14.377 21:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.377 21:16:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:14.377 21:16:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CVkDLvU6mb 00:17:14.636 [2024-07-14 21:16:26.114994] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:14.636 [2024-07-14 21:16:26.115208] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:14.636 [2024-07-14 21:16:26.128170] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:14.636 [2024-07-14 21:16:26.128952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:14.636 [2024-07-14 21:16:26.129924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:14.636 [2024-07-14 21:16:26.130910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:14.636 [2024-07-14 21:16:26.130958] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:14.636 [2024-07-14 21:16:26.130981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:14.636 request: 00:17:14.636 { 00:17:14.636 "name": "TLSTEST", 00:17:14.636 "trtype": "tcp", 00:17:14.636 "traddr": "10.0.0.2", 00:17:14.636 "adrfam": "ipv4", 00:17:14.636 "trsvcid": "4420", 00:17:14.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:14.636 "prchk_reftag": false, 00:17:14.636 "prchk_guard": false, 00:17:14.636 "hdgst": false, 00:17:14.636 "ddgst": false, 00:17:14.636 "psk": "/tmp/tmp.CVkDLvU6mb", 00:17:14.636 "method": "bdev_nvme_attach_controller", 00:17:14.636 "req_id": 1 00:17:14.636 } 00:17:14.636 Got JSON-RPC error response 00:17:14.636 response: 00:17:14.636 { 00:17:14.636 "code": -5, 00:17:14.636 "message": "Input/output error" 00:17:14.636 } 00:17:14.636 21:16:26 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 76047 00:17:14.636 21:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76047 ']' 00:17:14.636 21:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76047 00:17:14.636 21:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:14.636 21:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:14.636 21:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76047 00:17:14.636 killing process with pid 76047 00:17:14.636 Received shutdown signal, test time was about 10.000000 seconds 00:17:14.636 00:17:14.636 Latency(us) 00:17:14.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.637 =================================================================================================================== 00:17:14.637 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:14.637 21:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:14.637 21:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:14.637 21:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76047' 00:17:14.637 21:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76047 00:17:14.637 [2024-07-14 21:16:26.181311] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:14.637 21:16:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76047 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.n53SKgSHfH 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.n53SKgSHfH 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:16.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.n53SKgSHfH 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.n53SKgSHfH' 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76087 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76087 /var/tmp/bdevperf.sock 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76087 ']' 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.014 21:16:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.014 [2024-07-14 21:16:27.253366] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:16.014 [2024-07-14 21:16:27.253537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76087 ] 00:17:16.014 [2024-07-14 21:16:27.411066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.274 [2024-07-14 21:16:27.585857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.274 [2024-07-14 21:16:27.751720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:16.842 21:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.842 21:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:16.842 21:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.n53SKgSHfH 00:17:17.100 [2024-07-14 21:16:28.460730] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:17.101 [2024-07-14 21:16:28.460949] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:17.101 [2024-07-14 21:16:28.471962] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:17.101 [2024-07-14 21:16:28.472027] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:17.101 [2024-07-14 21:16:28.472118] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:17.101 [2024-07-14 21:16:28.472755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:17.101 [2024-07-14 21:16:28.473716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:17.101 [2024-07-14 21:16:28.474719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:17.101 [2024-07-14 21:16:28.474798] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:17.101 [2024-07-14 21:16:28.474822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:17.101 request: 00:17:17.101 { 00:17:17.101 "name": "TLSTEST", 00:17:17.101 "trtype": "tcp", 00:17:17.101 "traddr": "10.0.0.2", 00:17:17.101 "adrfam": "ipv4", 00:17:17.101 "trsvcid": "4420", 00:17:17.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.101 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:17.101 "prchk_reftag": false, 00:17:17.101 "prchk_guard": false, 00:17:17.101 "hdgst": false, 00:17:17.101 "ddgst": false, 00:17:17.101 "psk": "/tmp/tmp.n53SKgSHfH", 00:17:17.101 "method": "bdev_nvme_attach_controller", 00:17:17.101 "req_id": 1 00:17:17.101 } 00:17:17.101 Got JSON-RPC error response 00:17:17.101 response: 00:17:17.101 { 00:17:17.101 "code": -5, 00:17:17.101 "message": "Input/output error" 00:17:17.101 } 00:17:17.101 21:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 76087 00:17:17.101 21:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76087 ']' 00:17:17.101 21:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76087 00:17:17.101 21:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:17.101 21:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:17.101 21:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76087 00:17:17.101 killing process with pid 76087 00:17:17.101 Received shutdown signal, test time was about 10.000000 seconds 00:17:17.101 00:17:17.101 Latency(us) 00:17:17.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.101 =================================================================================================================== 00:17:17.101 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:17.101 21:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:17.101 21:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:17.101 21:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76087' 00:17:17.101 21:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76087 00:17:17.101 [2024-07-14 21:16:28.522947] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:17.101 21:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76087 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.n53SKgSHfH 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.n53SKgSHfH 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.n53SKgSHfH 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.n53SKgSHfH' 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76121 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76121 /var/tmp/bdevperf.sock 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76121 ']' 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.036 21:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.294 [2024-07-14 21:16:29.631660] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:18.294 [2024-07-14 21:16:29.631860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76121 ] 00:17:18.294 [2024-07-14 21:16:29.800405] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.552 [2024-07-14 21:16:29.955040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.810 [2024-07-14 21:16:30.124845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:19.069 21:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.069 21:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:19.069 21:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n53SKgSHfH 00:17:19.329 [2024-07-14 21:16:30.722467] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.329 [2024-07-14 21:16:30.722673] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:19.329 [2024-07-14 21:16:30.732398] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:19.329 [2024-07-14 21:16:30.732493] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:19.329 [2024-07-14 21:16:30.732602] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:19.329 [2024-07-14 21:16:30.732869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:19.329 [2024-07-14 21:16:30.733846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:19.329 [2024-07-14 21:16:30.734836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:19.329 [2024-07-14 21:16:30.734898] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:19.329 [2024-07-14 21:16:30.734916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:19.329 request: 00:17:19.329 { 00:17:19.329 "name": "TLSTEST", 00:17:19.329 "trtype": "tcp", 00:17:19.329 "traddr": "10.0.0.2", 00:17:19.329 "adrfam": "ipv4", 00:17:19.329 "trsvcid": "4420", 00:17:19.329 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:19.329 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.329 "prchk_reftag": false, 00:17:19.329 "prchk_guard": false, 00:17:19.329 "hdgst": false, 00:17:19.329 "ddgst": false, 00:17:19.329 "psk": "/tmp/tmp.n53SKgSHfH", 00:17:19.329 "method": "bdev_nvme_attach_controller", 00:17:19.329 "req_id": 1 00:17:19.329 } 00:17:19.329 Got JSON-RPC error response 00:17:19.329 response: 00:17:19.329 { 00:17:19.329 "code": -5, 00:17:19.329 "message": "Input/output error" 00:17:19.329 } 00:17:19.329 21:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 76121 00:17:19.329 21:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76121 ']' 00:17:19.329 21:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76121 00:17:19.329 21:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:19.329 21:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.329 21:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76121 00:17:19.329 killing process with pid 76121 00:17:19.329 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.329 00:17:19.329 Latency(us) 00:17:19.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.329 =================================================================================================================== 00:17:19.329 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:19.329 21:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:19.329 21:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:19.329 21:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76121' 00:17:19.329 21:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76121 00:17:19.329 [2024-07-14 21:16:30.778312] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 21:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76121 00:17:19.329 scheduled for removal in v24.09 hit 1 times 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76155 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76155 /var/tmp/bdevperf.sock 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76155 ']' 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.266 21:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:20.266 [2024-07-14 21:16:31.771720] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
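The two attach attempts above fail before the NVMe connect completes: the log shows "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>", so the PSK lookup is keyed by an identity string built from the host and subsystem NQNs, and only the nqn.2016-06.io.spdk:host1 / nqn.2016-06.io.spdk:cnode1 pair appears to have a key registered; bdev_nvme_attach_controller therefore reports code -5 (-EIO, "Input/output error"). A minimal sketch of that identity string with a hypothetical helper (the "NVMe0R01" prefix is copied verbatim from the error lines, not taken from SPDK headers):

    def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
        # Hypothetical helper; the "NVMe0R01" prefix and the "<prefix> <hostnqn> <subnqn>"
        # layout are taken from the posix_sock_psk_find_session_server_cb errors above.
        return f"NVMe0R01 {hostnqn} {subnqn}"

    # The two identities that could not be found in the runs above:
    print(tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
    print(tls_psk_identity("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode2"))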
00:17:20.266 [2024-07-14 21:16:31.771916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76155 ] 00:17:20.525 [2024-07-14 21:16:31.942234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.784 [2024-07-14 21:16:32.102757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.784 [2024-07-14 21:16:32.258506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:21.351 21:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.351 21:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:21.351 21:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:21.351 [2024-07-14 21:16:32.862661] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:21.351 [2024-07-14 21:16:32.864344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:17:21.351 [2024-07-14 21:16:32.865329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:21.351 [2024-07-14 21:16:32.865377] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:21.351 [2024-07-14 21:16:32.865410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:21.351 request: 00:17:21.351 { 00:17:21.351 "name": "TLSTEST", 00:17:21.351 "trtype": "tcp", 00:17:21.351 "traddr": "10.0.0.2", 00:17:21.351 "adrfam": "ipv4", 00:17:21.351 "trsvcid": "4420", 00:17:21.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.351 "prchk_reftag": false, 00:17:21.351 "prchk_guard": false, 00:17:21.351 "hdgst": false, 00:17:21.351 "ddgst": false, 00:17:21.351 "method": "bdev_nvme_attach_controller", 00:17:21.351 "req_id": 1 00:17:21.351 } 00:17:21.351 Got JSON-RPC error response 00:17:21.351 response: 00:17:21.351 { 00:17:21.351 "code": -5, 00:17:21.351 "message": "Input/output error" 00:17:21.351 } 00:17:21.351 21:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 76155 00:17:21.351 21:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76155 ']' 00:17:21.351 21:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76155 00:17:21.351 21:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:21.351 21:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:21.351 21:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76155 00:17:21.609 killing process with pid 76155 00:17:21.609 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.609 00:17:21.609 Latency(us) 00:17:21.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.609 =================================================================================================================== 00:17:21.609 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.609 21:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:21.609 21:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:21.609 21:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76155' 00:17:21.609 21:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76155 00:17:21.609 21:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76155 00:17:22.543 21:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:22.543 21:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:22.543 21:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:22.543 21:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:22.543 21:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:22.543 21:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 75668 00:17:22.543 21:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 75668 ']' 00:17:22.544 21:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 75668 00:17:22.544 21:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:22.544 21:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:22.544 21:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75668 00:17:22.544 killing process with pid 75668 00:17:22.544 21:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:22.544 21:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:22.544 21:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
75668' 00:17:22.544 21:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 75668 00:17:22.544 [2024-07-14 21:16:33.892619] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:22.544 21:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 75668 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ezoJopq28O 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ezoJopq28O 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76211 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76211 00:17:23.919 21:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76211 ']' 00:17:23.920 21:16:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:23.920 21:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.920 21:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.920 21:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.920 21:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.920 21:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:23.920 [2024-07-14 21:16:35.240166] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
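The NVMeTLSkey-1:02:... value assembled above is the interchange form of the raw 48-character key: the inline python step base64-encodes the key bytes plus a short checksum and prepends the NVMeTLSkey-1 prefix and a two-digit hash field. A rough re-creation of that step, assuming the checksum is a little-endian CRC32 of the key bytes (only the resulting string, not the helper's body, is visible in this log):

    import base64
    import zlib

    def format_interchange_psk(key: str, hash_id: int) -> str:
        # Hypothetical stand-in for the format_interchange_psk/format_key helpers traced above.
        # Assumption: key bytes followed by their little-endian CRC32, then base64-encoded;
        # hash_id becomes the two-digit hex field ("02" in key_long above).
        raw = key.encode()
        crc = zlib.crc32(raw).to_bytes(4, "little")
        return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, base64.b64encode(raw + crc).decode())

    key_long = format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2)
    # Should reproduce the key_long value above if the CRC32 assumption holds.

The result is written to a mktemp file and immediately chmod 0600, which matters later in the run: both the initiator and the target refuse to load a key file with looser permissions.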
00:17:23.920 [2024-07-14 21:16:35.240342] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.920 [2024-07-14 21:16:35.414437] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.178 [2024-07-14 21:16:35.587270] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.178 [2024-07-14 21:16:35.587369] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.178 [2024-07-14 21:16:35.587385] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.178 [2024-07-14 21:16:35.587398] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.178 [2024-07-14 21:16:35.587408] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.178 [2024-07-14 21:16:35.587445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.436 [2024-07-14 21:16:35.752466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:24.694 21:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.694 21:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:24.694 21:16:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:24.694 21:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:24.694 21:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.695 21:16:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.695 21:16:36 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ezoJopq28O 00:17:24.695 21:16:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ezoJopq28O 00:17:24.695 21:16:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:24.953 [2024-07-14 21:16:36.356213] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.953 21:16:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:25.217 21:16:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:25.475 [2024-07-14 21:16:36.796386] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:25.475 [2024-07-14 21:16:36.796707] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.475 21:16:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:25.734 malloc0 00:17:25.734 21:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:25.734 21:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ezoJopq28O 00:17:26.003 
[2024-07-14 21:16:37.474836] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:26.003 21:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ezoJopq28O 00:17:26.003 21:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:26.004 21:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:26.004 21:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:26.004 21:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ezoJopq28O' 00:17:26.004 21:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:26.004 21:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:26.004 21:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76266 00:17:26.004 21:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:26.004 21:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76266 /var/tmp/bdevperf.sock 00:17:26.004 21:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76266 ']' 00:17:26.004 21:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.004 21:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:26.004 21:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:26.004 21:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:26.004 21:16:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.285 [2024-07-14 21:16:37.576164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:26.285 [2024-07-14 21:16:37.576313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76266 ] 00:17:26.285 [2024-07-14 21:16:37.740393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.568 [2024-07-14 21:16:37.969971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.827 [2024-07-14 21:16:38.142271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:27.085 21:16:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:27.085 21:16:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:27.085 21:16:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ezoJopq28O 00:17:27.343 [2024-07-14 21:16:38.730568] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:27.343 [2024-07-14 21:16:38.730793] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:27.343 TLSTESTn1 00:17:27.343 21:16:38 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:27.602 Running I/O for 10 seconds... 00:17:37.583 00:17:37.583 Latency(us) 00:17:37.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.583 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:37.583 Verification LBA range: start 0x0 length 0x2000 00:17:37.583 TLSTESTn1 : 10.04 2868.94 11.21 0.00 0.00 44521.62 8519.68 26810.18 00:17:37.583 =================================================================================================================== 00:17:37.583 Total : 2868.94 11.21 0.00 0.00 44521.62 8519.68 26810.18 00:17:37.583 0 00:17:37.583 21:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:37.583 21:16:48 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 76266 00:17:37.583 21:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76266 ']' 00:17:37.583 21:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76266 00:17:37.583 21:16:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:37.583 21:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:37.583 21:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76266 00:17:37.583 21:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:37.583 21:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:37.583 killing process with pid 76266 00:17:37.583 21:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76266' 00:17:37.583 21:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76266 00:17:37.583 Received shutdown signal, test time was about 10.000000 seconds 00:17:37.583 00:17:37.583 Latency(us) 00:17:37.583 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:17:37.583 =================================================================================================================== 00:17:37.583 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:37.583 [2024-07-14 21:16:49.022664] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:37.583 21:16:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76266 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ezoJopq28O 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ezoJopq28O 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ezoJopq28O 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ezoJopq28O 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ezoJopq28O' 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76402 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76402 /var/tmp/bdevperf.sock 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76402 ']' 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.518 21:16:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.777 [2024-07-14 21:16:50.153747] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:38.777 [2024-07-14 21:16:50.153970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76402 ] 00:17:38.777 [2024-07-14 21:16:50.319932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.036 [2024-07-14 21:16:50.501168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.295 [2024-07-14 21:16:50.673499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:39.554 21:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.554 21:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:39.554 21:16:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ezoJopq28O 00:17:39.814 [2024-07-14 21:16:51.258567] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:39.814 [2024-07-14 21:16:51.258703] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:39.814 [2024-07-14 21:16:51.258720] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ezoJopq28O 00:17:39.814 request: 00:17:39.814 { 00:17:39.814 "name": "TLSTEST", 00:17:39.814 "trtype": "tcp", 00:17:39.814 "traddr": "10.0.0.2", 00:17:39.814 "adrfam": "ipv4", 00:17:39.814 "trsvcid": "4420", 00:17:39.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.814 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:39.814 "prchk_reftag": false, 00:17:39.814 "prchk_guard": false, 00:17:39.814 "hdgst": false, 00:17:39.814 "ddgst": false, 00:17:39.814 "psk": "/tmp/tmp.ezoJopq28O", 00:17:39.814 "method": "bdev_nvme_attach_controller", 00:17:39.814 "req_id": 1 00:17:39.814 } 00:17:39.814 Got JSON-RPC error response 00:17:39.814 response: 00:17:39.814 { 00:17:39.814 "code": -1, 00:17:39.814 "message": "Operation not permitted" 00:17:39.814 } 00:17:39.814 21:16:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 76402 00:17:39.814 21:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76402 ']' 00:17:39.814 21:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76402 00:17:39.814 21:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:39.814 21:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.814 21:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76402 00:17:39.814 killing process with pid 76402 00:17:39.814 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.814 00:17:39.814 Latency(us) 00:17:39.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.814 =================================================================================================================== 00:17:39.814 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:39.814 21:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:39.814 21:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:39.814 21:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 76402' 00:17:39.814 21:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76402 00:17:39.814 21:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76402 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 76211 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76211 ']' 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76211 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76211 00:17:41.194 killing process with pid 76211 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76211' 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76211 00:17:41.194 [2024-07-14 21:16:52.384753] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:41.194 21:16:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76211 00:17:42.131 21:16:53 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:42.131 21:16:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.131 21:16:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:42.131 21:16:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.131 21:16:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76453 00:17:42.131 21:16:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76453 00:17:42.131 21:16:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76453 ']' 00:17:42.131 21:16:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:42.131 21:16:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.132 21:16:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:42.132 21:16:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.132 21:16:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:42.132 21:16:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.132 [2024-07-14 21:16:53.663814] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
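The "Incorrect permissions for PSK file" / "Could not load PSK" failure above follows directly from the chmod 0666 a few lines earlier, and the freshly started target below rejects the same file for the same reason ("Could not retrieve PSK from file") before the test restores mode 0600. A rough illustration of the kind of check those errors imply, assuming the requirement is simply that the key file not be readable or writable by group or others (the exact mask SPDK applies is not visible in this log):

    import os
    import stat

    def psk_file_permissions_ok(path: str) -> bool:
        # Illustrative check only: reject a key file that group or others can access,
        # which is what a 0666 mode violates and a 0600 mode satisfies.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return (mode & 0o077) == 0

    # e.g. after "chmod 0666 /tmp/tmp.ezoJopq28O" this returns False; after 0600, True.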
00:17:42.132 [2024-07-14 21:16:53.663991] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.390 [2024-07-14 21:16:53.835484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.648 [2024-07-14 21:16:54.013430] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.648 [2024-07-14 21:16:54.013511] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.648 [2024-07-14 21:16:54.013545] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.648 [2024-07-14 21:16:54.013558] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.648 [2024-07-14 21:16:54.013569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.648 [2024-07-14 21:16:54.013622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.648 [2024-07-14 21:16:54.180096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ezoJopq28O 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ezoJopq28O 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.ezoJopq28O 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ezoJopq28O 00:17:43.215 21:16:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:43.473 [2024-07-14 21:16:54.814019] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.473 21:16:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:43.732 21:16:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:43.990 [2024-07-14 21:16:55.314168] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:17:43.990 [2024-07-14 21:16:55.314487] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.990 21:16:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:44.247 malloc0 00:17:44.247 21:16:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:44.505 21:16:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ezoJopq28O 00:17:44.762 [2024-07-14 21:16:56.100542] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:44.762 [2024-07-14 21:16:56.100642] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:44.762 [2024-07-14 21:16:56.100678] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:44.762 request: 00:17:44.762 { 00:17:44.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.762 "host": "nqn.2016-06.io.spdk:host1", 00:17:44.762 "psk": "/tmp/tmp.ezoJopq28O", 00:17:44.762 "method": "nvmf_subsystem_add_host", 00:17:44.762 "req_id": 1 00:17:44.762 } 00:17:44.762 Got JSON-RPC error response 00:17:44.762 response: 00:17:44.762 { 00:17:44.762 "code": -32603, 00:17:44.762 "message": "Internal error" 00:17:44.762 } 00:17:44.762 21:16:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:44.762 21:16:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:44.762 21:16:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:44.762 21:16:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:44.762 21:16:56 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 76453 00:17:44.762 21:16:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76453 ']' 00:17:44.762 21:16:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76453 00:17:44.762 21:16:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:44.762 21:16:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:44.762 21:16:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76453 00:17:44.762 21:16:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:44.762 21:16:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:44.763 21:16:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76453' 00:17:44.763 killing process with pid 76453 00:17:44.763 21:16:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76453 00:17:44.763 21:16:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76453 00:17:46.140 21:16:57 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ezoJopq28O 00:17:46.140 21:16:57 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:46.140 21:16:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.140 21:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:46.140 21:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.140 21:16:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76528 00:17:46.140 
21:16:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76528 00:17:46.140 21:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76528 ']' 00:17:46.140 21:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.140 21:16:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:46.140 21:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.140 21:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.140 21:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.140 21:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.140 [2024-07-14 21:16:57.391654] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:46.140 [2024-07-14 21:16:57.391869] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.140 [2024-07-14 21:16:57.565046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.398 [2024-07-14 21:16:57.741230] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.398 [2024-07-14 21:16:57.741306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.398 [2024-07-14 21:16:57.741322] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.398 [2024-07-14 21:16:57.741334] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.398 [2024-07-14 21:16:57.741345] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:46.398 [2024-07-14 21:16:57.741383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.398 [2024-07-14 21:16:57.917104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:46.964 21:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.964 21:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:46.964 21:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:46.964 21:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:46.964 21:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.964 21:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.964 21:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ezoJopq28O 00:17:46.964 21:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ezoJopq28O 00:17:46.964 21:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:47.223 [2024-07-14 21:16:58.570908] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.223 21:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:47.482 21:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:47.741 [2024-07-14 21:16:59.079070] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:47.741 [2024-07-14 21:16:59.079419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.741 21:16:59 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:48.001 malloc0 00:17:48.001 21:16:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:48.259 21:16:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ezoJopq28O 00:17:48.259 [2024-07-14 21:16:59.760282] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:48.259 21:16:59 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=76578 00:17:48.259 21:16:59 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:48.259 21:16:59 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:48.259 21:16:59 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 76578 /var/tmp/bdevperf.sock 00:17:48.259 21:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76578 ']' 00:17:48.259 21:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.259 21:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.259 21:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.259 21:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.259 21:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.518 [2024-07-14 21:16:59.894300] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:48.518 [2024-07-14 21:16:59.894537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76578 ] 00:17:48.776 [2024-07-14 21:17:00.069302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.776 [2024-07-14 21:17:00.243560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.035 [2024-07-14 21:17:00.416589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:49.294 21:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.294 21:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:49.294 21:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ezoJopq28O 00:17:49.553 [2024-07-14 21:17:00.992615] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.553 [2024-07-14 21:17:00.992812] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:49.553 TLSTESTn1 00:17:49.812 21:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:50.071 21:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:17:50.071 "subsystems": [ 00:17:50.071 { 00:17:50.071 "subsystem": "keyring", 00:17:50.071 "config": [] 00:17:50.071 }, 00:17:50.071 { 00:17:50.071 "subsystem": "iobuf", 00:17:50.071 "config": [ 00:17:50.071 { 00:17:50.071 "method": "iobuf_set_options", 00:17:50.071 "params": { 00:17:50.071 "small_pool_count": 8192, 00:17:50.071 "large_pool_count": 1024, 00:17:50.071 "small_bufsize": 8192, 00:17:50.071 "large_bufsize": 135168 00:17:50.071 } 00:17:50.071 } 00:17:50.071 ] 00:17:50.071 }, 00:17:50.071 { 00:17:50.071 "subsystem": "sock", 00:17:50.071 "config": [ 00:17:50.071 { 00:17:50.071 "method": "sock_set_default_impl", 00:17:50.071 "params": { 00:17:50.071 "impl_name": "uring" 00:17:50.071 } 00:17:50.071 }, 00:17:50.071 { 00:17:50.071 "method": "sock_impl_set_options", 00:17:50.071 "params": { 00:17:50.071 "impl_name": "ssl", 00:17:50.071 "recv_buf_size": 4096, 00:17:50.071 "send_buf_size": 4096, 00:17:50.071 "enable_recv_pipe": true, 00:17:50.071 "enable_quickack": false, 00:17:50.071 "enable_placement_id": 0, 00:17:50.071 "enable_zerocopy_send_server": true, 00:17:50.071 "enable_zerocopy_send_client": false, 00:17:50.071 "zerocopy_threshold": 0, 00:17:50.071 "tls_version": 0, 00:17:50.071 "enable_ktls": false 00:17:50.071 } 00:17:50.071 }, 00:17:50.071 { 00:17:50.071 "method": "sock_impl_set_options", 00:17:50.071 "params": { 00:17:50.071 "impl_name": "posix", 00:17:50.071 "recv_buf_size": 2097152, 
00:17:50.071 "send_buf_size": 2097152, 00:17:50.071 "enable_recv_pipe": true, 00:17:50.071 "enable_quickack": false, 00:17:50.071 "enable_placement_id": 0, 00:17:50.071 "enable_zerocopy_send_server": true, 00:17:50.071 "enable_zerocopy_send_client": false, 00:17:50.071 "zerocopy_threshold": 0, 00:17:50.071 "tls_version": 0, 00:17:50.071 "enable_ktls": false 00:17:50.071 } 00:17:50.071 }, 00:17:50.071 { 00:17:50.071 "method": "sock_impl_set_options", 00:17:50.071 "params": { 00:17:50.071 "impl_name": "uring", 00:17:50.071 "recv_buf_size": 2097152, 00:17:50.071 "send_buf_size": 2097152, 00:17:50.071 "enable_recv_pipe": true, 00:17:50.071 "enable_quickack": false, 00:17:50.071 "enable_placement_id": 0, 00:17:50.071 "enable_zerocopy_send_server": false, 00:17:50.071 "enable_zerocopy_send_client": false, 00:17:50.071 "zerocopy_threshold": 0, 00:17:50.071 "tls_version": 0, 00:17:50.071 "enable_ktls": false 00:17:50.071 } 00:17:50.071 } 00:17:50.071 ] 00:17:50.071 }, 00:17:50.071 { 00:17:50.071 "subsystem": "vmd", 00:17:50.071 "config": [] 00:17:50.071 }, 00:17:50.071 { 00:17:50.071 "subsystem": "accel", 00:17:50.071 "config": [ 00:17:50.071 { 00:17:50.071 "method": "accel_set_options", 00:17:50.071 "params": { 00:17:50.071 "small_cache_size": 128, 00:17:50.071 "large_cache_size": 16, 00:17:50.071 "task_count": 2048, 00:17:50.071 "sequence_count": 2048, 00:17:50.071 "buf_count": 2048 00:17:50.071 } 00:17:50.071 } 00:17:50.071 ] 00:17:50.071 }, 00:17:50.071 { 00:17:50.071 "subsystem": "bdev", 00:17:50.071 "config": [ 00:17:50.071 { 00:17:50.071 "method": "bdev_set_options", 00:17:50.071 "params": { 00:17:50.071 "bdev_io_pool_size": 65535, 00:17:50.071 "bdev_io_cache_size": 256, 00:17:50.071 "bdev_auto_examine": true, 00:17:50.071 "iobuf_small_cache_size": 128, 00:17:50.071 "iobuf_large_cache_size": 16 00:17:50.071 } 00:17:50.071 }, 00:17:50.071 { 00:17:50.071 "method": "bdev_raid_set_options", 00:17:50.071 "params": { 00:17:50.071 "process_window_size_kb": 1024 00:17:50.071 } 00:17:50.071 }, 00:17:50.071 { 00:17:50.071 "method": "bdev_iscsi_set_options", 00:17:50.071 "params": { 00:17:50.071 "timeout_sec": 30 00:17:50.071 } 00:17:50.071 }, 00:17:50.071 { 00:17:50.071 "method": "bdev_nvme_set_options", 00:17:50.071 "params": { 00:17:50.071 "action_on_timeout": "none", 00:17:50.071 "timeout_us": 0, 00:17:50.071 "timeout_admin_us": 0, 00:17:50.071 "keep_alive_timeout_ms": 10000, 00:17:50.071 "arbitration_burst": 0, 00:17:50.071 "low_priority_weight": 0, 00:17:50.071 "medium_priority_weight": 0, 00:17:50.071 "high_priority_weight": 0, 00:17:50.071 "nvme_adminq_poll_period_us": 10000, 00:17:50.071 "nvme_ioq_poll_period_us": 0, 00:17:50.071 "io_queue_requests": 0, 00:17:50.071 "delay_cmd_submit": true, 00:17:50.071 "transport_retry_count": 4, 00:17:50.071 "bdev_retry_count": 3, 00:17:50.071 "transport_ack_timeout": 0, 00:17:50.072 "ctrlr_loss_timeout_sec": 0, 00:17:50.072 "reconnect_delay_sec": 0, 00:17:50.072 "fast_io_fail_timeout_sec": 0, 00:17:50.072 "disable_auto_failback": false, 00:17:50.072 "generate_uuids": false, 00:17:50.072 "transport_tos": 0, 00:17:50.072 "nvme_error_stat": false, 00:17:50.072 "rdma_srq_size": 0, 00:17:50.072 "io_path_stat": false, 00:17:50.072 "allow_accel_sequence": false, 00:17:50.072 "rdma_max_cq_size": 0, 00:17:50.072 "rdma_cm_event_timeout_ms": 0, 00:17:50.072 "dhchap_digests": [ 00:17:50.072 "sha256", 00:17:50.072 "sha384", 00:17:50.072 "sha512" 00:17:50.072 ], 00:17:50.072 "dhchap_dhgroups": [ 00:17:50.072 "null", 00:17:50.072 "ffdhe2048", 00:17:50.072 "ffdhe3072", 
00:17:50.072 "ffdhe4096", 00:17:50.072 "ffdhe6144", 00:17:50.072 "ffdhe8192" 00:17:50.072 ] 00:17:50.072 } 00:17:50.072 }, 00:17:50.072 { 00:17:50.072 "method": "bdev_nvme_set_hotplug", 00:17:50.072 "params": { 00:17:50.072 "period_us": 100000, 00:17:50.072 "enable": false 00:17:50.072 } 00:17:50.072 }, 00:17:50.072 { 00:17:50.072 "method": "bdev_malloc_create", 00:17:50.072 "params": { 00:17:50.072 "name": "malloc0", 00:17:50.072 "num_blocks": 8192, 00:17:50.072 "block_size": 4096, 00:17:50.072 "physical_block_size": 4096, 00:17:50.072 "uuid": "04019a92-40c4-4f10-80bc-daaabffdfc3e", 00:17:50.072 "optimal_io_boundary": 0 00:17:50.072 } 00:17:50.072 }, 00:17:50.072 { 00:17:50.072 "method": "bdev_wait_for_examine" 00:17:50.072 } 00:17:50.072 ] 00:17:50.072 }, 00:17:50.072 { 00:17:50.072 "subsystem": "nbd", 00:17:50.072 "config": [] 00:17:50.072 }, 00:17:50.072 { 00:17:50.072 "subsystem": "scheduler", 00:17:50.072 "config": [ 00:17:50.072 { 00:17:50.072 "method": "framework_set_scheduler", 00:17:50.072 "params": { 00:17:50.072 "name": "static" 00:17:50.072 } 00:17:50.072 } 00:17:50.072 ] 00:17:50.072 }, 00:17:50.072 { 00:17:50.072 "subsystem": "nvmf", 00:17:50.072 "config": [ 00:17:50.072 { 00:17:50.072 "method": "nvmf_set_config", 00:17:50.072 "params": { 00:17:50.072 "discovery_filter": "match_any", 00:17:50.072 "admin_cmd_passthru": { 00:17:50.072 "identify_ctrlr": false 00:17:50.072 } 00:17:50.072 } 00:17:50.072 }, 00:17:50.072 { 00:17:50.072 "method": "nvmf_set_max_subsystems", 00:17:50.072 "params": { 00:17:50.072 "max_subsystems": 1024 00:17:50.072 } 00:17:50.072 }, 00:17:50.072 { 00:17:50.072 "method": "nvmf_set_crdt", 00:17:50.072 "params": { 00:17:50.072 "crdt1": 0, 00:17:50.072 "crdt2": 0, 00:17:50.072 "crdt3": 0 00:17:50.072 } 00:17:50.072 }, 00:17:50.072 { 00:17:50.072 "method": "nvmf_create_transport", 00:17:50.072 "params": { 00:17:50.072 "trtype": "TCP", 00:17:50.072 "max_queue_depth": 128, 00:17:50.072 "max_io_qpairs_per_ctrlr": 127, 00:17:50.072 "in_capsule_data_size": 4096, 00:17:50.072 "max_io_size": 131072, 00:17:50.072 "io_unit_size": 131072, 00:17:50.072 "max_aq_depth": 128, 00:17:50.072 "num_shared_buffers": 511, 00:17:50.072 "buf_cache_size": 4294967295, 00:17:50.072 "dif_insert_or_strip": false, 00:17:50.072 "zcopy": false, 00:17:50.072 "c2h_success": false, 00:17:50.072 "sock_priority": 0, 00:17:50.072 "abort_timeout_sec": 1, 00:17:50.072 "ack_timeout": 0, 00:17:50.072 "data_wr_pool_size": 0 00:17:50.072 } 00:17:50.072 }, 00:17:50.072 { 00:17:50.072 "method": "nvmf_create_subsystem", 00:17:50.072 "params": { 00:17:50.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.072 "allow_any_host": false, 00:17:50.072 "serial_number": "SPDK00000000000001", 00:17:50.072 "model_number": "SPDK bdev Controller", 00:17:50.072 "max_namespaces": 10, 00:17:50.072 "min_cntlid": 1, 00:17:50.072 "max_cntlid": 65519, 00:17:50.072 "ana_reporting": false 00:17:50.072 } 00:17:50.072 }, 00:17:50.072 { 00:17:50.072 "method": "nvmf_subsystem_add_host", 00:17:50.072 "params": { 00:17:50.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.072 "host": "nqn.2016-06.io.spdk:host1", 00:17:50.072 "psk": "/tmp/tmp.ezoJopq28O" 00:17:50.072 } 00:17:50.072 }, 00:17:50.072 { 00:17:50.072 "method": "nvmf_subsystem_add_ns", 00:17:50.072 "params": { 00:17:50.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.072 "namespace": { 00:17:50.072 "nsid": 1, 00:17:50.072 "bdev_name": "malloc0", 00:17:50.072 "nguid": "04019A9240C44F1080BCDAAABFFDFC3E", 00:17:50.072 "uuid": "04019a92-40c4-4f10-80bc-daaabffdfc3e", 
00:17:50.072 "no_auto_visible": false 00:17:50.072 } 00:17:50.072 } 00:17:50.072 }, 00:17:50.072 { 00:17:50.072 "method": "nvmf_subsystem_add_listener", 00:17:50.072 "params": { 00:17:50.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.072 "listen_address": { 00:17:50.072 "trtype": "TCP", 00:17:50.072 "adrfam": "IPv4", 00:17:50.072 "traddr": "10.0.0.2", 00:17:50.072 "trsvcid": "4420" 00:17:50.072 }, 00:17:50.072 "secure_channel": true 00:17:50.072 } 00:17:50.072 } 00:17:50.072 ] 00:17:50.072 } 00:17:50.072 ] 00:17:50.072 }' 00:17:50.072 21:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:50.332 21:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:50.332 "subsystems": [ 00:17:50.332 { 00:17:50.332 "subsystem": "keyring", 00:17:50.332 "config": [] 00:17:50.332 }, 00:17:50.332 { 00:17:50.332 "subsystem": "iobuf", 00:17:50.332 "config": [ 00:17:50.332 { 00:17:50.332 "method": "iobuf_set_options", 00:17:50.332 "params": { 00:17:50.332 "small_pool_count": 8192, 00:17:50.332 "large_pool_count": 1024, 00:17:50.332 "small_bufsize": 8192, 00:17:50.332 "large_bufsize": 135168 00:17:50.332 } 00:17:50.332 } 00:17:50.332 ] 00:17:50.332 }, 00:17:50.332 { 00:17:50.332 "subsystem": "sock", 00:17:50.332 "config": [ 00:17:50.332 { 00:17:50.332 "method": "sock_set_default_impl", 00:17:50.332 "params": { 00:17:50.332 "impl_name": "uring" 00:17:50.332 } 00:17:50.332 }, 00:17:50.332 { 00:17:50.332 "method": "sock_impl_set_options", 00:17:50.332 "params": { 00:17:50.332 "impl_name": "ssl", 00:17:50.332 "recv_buf_size": 4096, 00:17:50.332 "send_buf_size": 4096, 00:17:50.332 "enable_recv_pipe": true, 00:17:50.332 "enable_quickack": false, 00:17:50.332 "enable_placement_id": 0, 00:17:50.332 "enable_zerocopy_send_server": true, 00:17:50.332 "enable_zerocopy_send_client": false, 00:17:50.332 "zerocopy_threshold": 0, 00:17:50.332 "tls_version": 0, 00:17:50.332 "enable_ktls": false 00:17:50.332 } 00:17:50.332 }, 00:17:50.332 { 00:17:50.332 "method": "sock_impl_set_options", 00:17:50.332 "params": { 00:17:50.332 "impl_name": "posix", 00:17:50.332 "recv_buf_size": 2097152, 00:17:50.332 "send_buf_size": 2097152, 00:17:50.332 "enable_recv_pipe": true, 00:17:50.332 "enable_quickack": false, 00:17:50.332 "enable_placement_id": 0, 00:17:50.332 "enable_zerocopy_send_server": true, 00:17:50.332 "enable_zerocopy_send_client": false, 00:17:50.332 "zerocopy_threshold": 0, 00:17:50.332 "tls_version": 0, 00:17:50.332 "enable_ktls": false 00:17:50.332 } 00:17:50.332 }, 00:17:50.332 { 00:17:50.332 "method": "sock_impl_set_options", 00:17:50.332 "params": { 00:17:50.332 "impl_name": "uring", 00:17:50.332 "recv_buf_size": 2097152, 00:17:50.332 "send_buf_size": 2097152, 00:17:50.332 "enable_recv_pipe": true, 00:17:50.332 "enable_quickack": false, 00:17:50.332 "enable_placement_id": 0, 00:17:50.332 "enable_zerocopy_send_server": false, 00:17:50.332 "enable_zerocopy_send_client": false, 00:17:50.332 "zerocopy_threshold": 0, 00:17:50.332 "tls_version": 0, 00:17:50.332 "enable_ktls": false 00:17:50.332 } 00:17:50.332 } 00:17:50.332 ] 00:17:50.332 }, 00:17:50.332 { 00:17:50.332 "subsystem": "vmd", 00:17:50.332 "config": [] 00:17:50.332 }, 00:17:50.332 { 00:17:50.332 "subsystem": "accel", 00:17:50.332 "config": [ 00:17:50.332 { 00:17:50.332 "method": "accel_set_options", 00:17:50.332 "params": { 00:17:50.332 "small_cache_size": 128, 00:17:50.332 "large_cache_size": 16, 00:17:50.332 "task_count": 2048, 00:17:50.332 "sequence_count": 
2048, 00:17:50.332 "buf_count": 2048 00:17:50.332 } 00:17:50.332 } 00:17:50.332 ] 00:17:50.332 }, 00:17:50.332 { 00:17:50.332 "subsystem": "bdev", 00:17:50.332 "config": [ 00:17:50.332 { 00:17:50.332 "method": "bdev_set_options", 00:17:50.332 "params": { 00:17:50.332 "bdev_io_pool_size": 65535, 00:17:50.332 "bdev_io_cache_size": 256, 00:17:50.332 "bdev_auto_examine": true, 00:17:50.332 "iobuf_small_cache_size": 128, 00:17:50.332 "iobuf_large_cache_size": 16 00:17:50.332 } 00:17:50.332 }, 00:17:50.332 { 00:17:50.332 "method": "bdev_raid_set_options", 00:17:50.332 "params": { 00:17:50.332 "process_window_size_kb": 1024 00:17:50.332 } 00:17:50.332 }, 00:17:50.332 { 00:17:50.332 "method": "bdev_iscsi_set_options", 00:17:50.332 "params": { 00:17:50.332 "timeout_sec": 30 00:17:50.332 } 00:17:50.332 }, 00:17:50.332 { 00:17:50.332 "method": "bdev_nvme_set_options", 00:17:50.332 "params": { 00:17:50.332 "action_on_timeout": "none", 00:17:50.332 "timeout_us": 0, 00:17:50.332 "timeout_admin_us": 0, 00:17:50.332 "keep_alive_timeout_ms": 10000, 00:17:50.332 "arbitration_burst": 0, 00:17:50.332 "low_priority_weight": 0, 00:17:50.332 "medium_priority_weight": 0, 00:17:50.332 "high_priority_weight": 0, 00:17:50.332 "nvme_adminq_poll_period_us": 10000, 00:17:50.332 "nvme_ioq_poll_period_us": 0, 00:17:50.332 "io_queue_requests": 512, 00:17:50.332 "delay_cmd_submit": true, 00:17:50.332 "transport_retry_count": 4, 00:17:50.332 "bdev_retry_count": 3, 00:17:50.332 "transport_ack_timeout": 0, 00:17:50.332 "ctrlr_loss_timeout_sec": 0, 00:17:50.332 "reconnect_delay_sec": 0, 00:17:50.332 "fast_io_fail_timeout_sec": 0, 00:17:50.332 "disable_auto_failback": false, 00:17:50.332 "generate_uuids": false, 00:17:50.332 "transport_tos": 0, 00:17:50.332 "nvme_error_stat": false, 00:17:50.332 "rdma_srq_size": 0, 00:17:50.332 "io_path_stat": false, 00:17:50.332 "allow_accel_sequence": false, 00:17:50.332 "rdma_max_cq_size": 0, 00:17:50.332 "rdma_cm_event_timeout_ms": 0, 00:17:50.332 "dhchap_digests": [ 00:17:50.332 "sha256", 00:17:50.332 "sha384", 00:17:50.332 "sha512" 00:17:50.332 ], 00:17:50.332 "dhchap_dhgroups": [ 00:17:50.332 "null", 00:17:50.332 "ffdhe2048", 00:17:50.332 "ffdhe3072", 00:17:50.332 "ffdhe4096", 00:17:50.332 "ffdhe6144", 00:17:50.332 "ffdhe8192" 00:17:50.332 ] 00:17:50.332 } 00:17:50.332 }, 00:17:50.332 { 00:17:50.332 "method": "bdev_nvme_attach_controller", 00:17:50.332 "params": { 00:17:50.332 "name": "TLSTEST", 00:17:50.332 "trtype": "TCP", 00:17:50.332 "adrfam": "IPv4", 00:17:50.332 "traddr": "10.0.0.2", 00:17:50.332 "trsvcid": "4420", 00:17:50.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.332 "prchk_reftag": false, 00:17:50.332 "prchk_guard": false, 00:17:50.332 "ctrlr_loss_timeout_sec": 0, 00:17:50.332 "reconnect_delay_sec": 0, 00:17:50.332 "fast_io_fail_timeout_sec": 0, 00:17:50.332 "psk": "/tmp/tmp.ezoJopq28O", 00:17:50.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.332 "hdgst": false, 00:17:50.332 "ddgst": false 00:17:50.332 } 00:17:50.332 }, 00:17:50.332 { 00:17:50.332 "method": "bdev_nvme_set_hotplug", 00:17:50.332 "params": { 00:17:50.333 "period_us": 100000, 00:17:50.333 "enable": false 00:17:50.333 } 00:17:50.333 }, 00:17:50.333 { 00:17:50.333 "method": "bdev_wait_for_examine" 00:17:50.333 } 00:17:50.333 ] 00:17:50.333 }, 00:17:50.333 { 00:17:50.333 "subsystem": "nbd", 00:17:50.333 "config": [] 00:17:50.333 } 00:17:50.333 ] 00:17:50.333 }' 00:17:50.333 21:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 76578 00:17:50.333 21:17:01 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 76578 ']' 00:17:50.333 21:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76578 00:17:50.333 21:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:50.333 21:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:50.333 21:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76578 00:17:50.333 21:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:50.333 21:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:50.333 killing process with pid 76578 00:17:50.333 21:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76578' 00:17:50.333 Received shutdown signal, test time was about 10.000000 seconds 00:17:50.333 00:17:50.333 Latency(us) 00:17:50.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.333 =================================================================================================================== 00:17:50.333 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:50.333 21:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76578 00:17:50.333 [2024-07-14 21:17:01.795794] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:50.333 21:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76578 00:17:51.270 21:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 76528 00:17:51.270 21:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76528 ']' 00:17:51.270 21:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76528 00:17:51.270 21:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:51.270 21:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:51.270 21:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76528 00:17:51.270 21:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:51.270 21:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:51.270 killing process with pid 76528 00:17:51.270 21:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76528' 00:17:51.270 21:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76528 00:17:51.270 [2024-07-14 21:17:02.797596] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:51.270 21:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76528 00:17:52.646 21:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:52.646 21:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:17:52.646 "subsystems": [ 00:17:52.646 { 00:17:52.646 "subsystem": "keyring", 00:17:52.646 "config": [] 00:17:52.646 }, 00:17:52.646 { 00:17:52.646 "subsystem": "iobuf", 00:17:52.646 "config": [ 00:17:52.646 { 00:17:52.646 "method": "iobuf_set_options", 00:17:52.646 "params": { 00:17:52.646 "small_pool_count": 8192, 00:17:52.646 "large_pool_count": 1024, 00:17:52.646 "small_bufsize": 8192, 00:17:52.646 "large_bufsize": 135168 00:17:52.646 } 00:17:52.646 } 00:17:52.646 ] 00:17:52.646 }, 00:17:52.646 { 00:17:52.646 "subsystem": 
"sock", 00:17:52.646 "config": [ 00:17:52.646 { 00:17:52.646 "method": "sock_set_default_impl", 00:17:52.646 "params": { 00:17:52.646 "impl_name": "uring" 00:17:52.646 } 00:17:52.646 }, 00:17:52.646 { 00:17:52.646 "method": "sock_impl_set_options", 00:17:52.646 "params": { 00:17:52.646 "impl_name": "ssl", 00:17:52.646 "recv_buf_size": 4096, 00:17:52.646 "send_buf_size": 4096, 00:17:52.646 "enable_recv_pipe": true, 00:17:52.646 "enable_quickack": false, 00:17:52.646 "enable_placement_id": 0, 00:17:52.646 "enable_zerocopy_send_server": true, 00:17:52.646 "enable_zerocopy_send_client": false, 00:17:52.646 "zerocopy_threshold": 0, 00:17:52.646 "tls_version": 0, 00:17:52.646 "enable_ktls": false 00:17:52.646 } 00:17:52.646 }, 00:17:52.646 { 00:17:52.646 "method": "sock_impl_set_options", 00:17:52.646 "params": { 00:17:52.646 "impl_name": "posix", 00:17:52.646 "recv_buf_size": 2097152, 00:17:52.646 "send_buf_size": 2097152, 00:17:52.646 "enable_recv_pipe": true, 00:17:52.646 "enable_quickack": false, 00:17:52.646 "enable_placement_id": 0, 00:17:52.646 "enable_zerocopy_send_server": true, 00:17:52.646 "enable_zerocopy_send_client": false, 00:17:52.646 "zerocopy_threshold": 0, 00:17:52.646 "tls_version": 0, 00:17:52.646 "enable_ktls": false 00:17:52.646 } 00:17:52.646 }, 00:17:52.646 { 00:17:52.647 "method": "sock_impl_set_options", 00:17:52.647 "params": { 00:17:52.647 "impl_name": "uring", 00:17:52.647 "recv_buf_size": 2097152, 00:17:52.647 "send_buf_size": 2097152, 00:17:52.647 "enable_recv_pipe": true, 00:17:52.647 "enable_quickack": false, 00:17:52.647 "enable_placement_id": 0, 00:17:52.647 "enable_zerocopy_send_server": false, 00:17:52.647 "enable_zerocopy_send_client": false, 00:17:52.647 "zerocopy_threshold": 0, 00:17:52.647 "tls_version": 0, 00:17:52.647 "enable_ktls": false 00:17:52.647 } 00:17:52.647 } 00:17:52.647 ] 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "subsystem": "vmd", 00:17:52.647 "config": [] 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "subsystem": "accel", 00:17:52.647 "config": [ 00:17:52.647 { 00:17:52.647 "method": "accel_set_options", 00:17:52.647 "params": { 00:17:52.647 "small_cache_size": 128, 00:17:52.647 "large_cache_size": 16, 00:17:52.647 "task_count": 2048, 00:17:52.647 "sequence_count": 2048, 00:17:52.647 "buf_count": 2048 00:17:52.647 } 00:17:52.647 } 00:17:52.647 ] 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "subsystem": "bdev", 00:17:52.647 "config": [ 00:17:52.647 { 00:17:52.647 "method": "bdev_set_options", 00:17:52.647 "params": { 00:17:52.647 "bdev_io_pool_size": 65535, 00:17:52.647 "bdev_io_cache_size": 256, 00:17:52.647 "bdev_auto_examine": true, 00:17:52.647 "iobuf_small_cache_size": 128, 00:17:52.647 "iobuf_large_cache_size": 16 00:17:52.647 } 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "method": "bdev_raid_set_options", 00:17:52.647 "params": { 00:17:52.647 "process_window_size_kb": 1024 00:17:52.647 } 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "method": "bdev_iscsi_set_options", 00:17:52.647 "params": { 00:17:52.647 "timeout_sec": 30 00:17:52.647 } 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "method": "bdev_nvme_set_options", 00:17:52.647 "params": { 00:17:52.647 "action_on_timeout": "none", 00:17:52.647 "timeout_us": 0, 00:17:52.647 "timeout_admin_us": 0, 00:17:52.647 "keep_alive_timeout_ms": 10000, 00:17:52.647 "arbitration_burst": 0, 00:17:52.647 "low_priority_weight": 0, 00:17:52.647 "medium_priority_weight": 0, 00:17:52.647 "high_priority_weight": 0, 00:17:52.647 "nvme_adminq_poll_period_us": 10000, 00:17:52.647 "nvme_ioq_poll_period_us": 0, 
00:17:52.647 "io_queue_requests": 0, 00:17:52.647 "delay_cmd_submit": true, 00:17:52.647 "transport_retry_count": 4, 00:17:52.647 "bdev_retry_count": 3, 00:17:52.647 "transport_ack_timeout": 0, 00:17:52.647 "ctrlr_loss_timeout_sec": 0, 00:17:52.647 "reconnect_delay_sec": 0, 00:17:52.647 "fast_io_fail_timeout_sec": 0, 00:17:52.647 "disable_auto_failback": false, 00:17:52.647 "generate_uuids": false, 00:17:52.647 "transport_tos": 0, 00:17:52.647 "nvme_error_stat": false, 00:17:52.647 "rdma_srq_size": 0, 00:17:52.647 "io_path_stat": false, 00:17:52.647 "allow_accel_sequence": false, 00:17:52.647 "rdma_max_cq_size": 0, 00:17:52.647 "rdma_cm_event_timeout_ms": 0, 00:17:52.647 "dhchap_digests": [ 00:17:52.647 "sha256", 00:17:52.647 "sha384", 00:17:52.647 "sha512" 00:17:52.647 ], 00:17:52.647 "dhchap_dhgroups": [ 00:17:52.647 "null", 00:17:52.647 "ffdhe2048", 00:17:52.647 "ffdhe3072", 00:17:52.647 "ffdhe4096", 00:17:52.647 "ffdhe6144", 00:17:52.647 "ffdhe8192" 00:17:52.647 ] 00:17:52.647 } 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "method": "bdev_nvme_set_hotplug", 00:17:52.647 "params": { 00:17:52.647 "period_us": 100000, 00:17:52.647 "enable": false 00:17:52.647 } 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "method": "bdev_malloc_create", 00:17:52.647 "params": { 00:17:52.647 "name": "malloc0", 00:17:52.647 "num_blocks": 8192, 00:17:52.647 "block_size": 4096, 00:17:52.647 "physical_block_size": 4096, 00:17:52.647 "uuid": "04019a92-40c4-4f10-80bc-daaabffdfc3e", 00:17:52.647 "optimal_io_boundary": 0 00:17:52.647 } 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "method": "bdev_wait_for_examine" 00:17:52.647 } 00:17:52.647 ] 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "subsystem": "nbd", 00:17:52.647 "config": [] 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "subsystem": "scheduler", 00:17:52.647 "config": [ 00:17:52.647 { 00:17:52.647 "method": "framework_set_scheduler", 00:17:52.647 "params": { 00:17:52.647 "name": "static" 00:17:52.647 } 00:17:52.647 } 00:17:52.647 ] 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "subsystem": "nvmf", 00:17:52.647 "config": [ 00:17:52.647 { 00:17:52.647 "method": "nvmf_set_config", 00:17:52.647 "params": { 00:17:52.647 "discovery_filter": "match_any", 00:17:52.647 "admin_cmd_passthru": { 00:17:52.647 "identify_ctrlr": false 00:17:52.647 } 00:17:52.647 } 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "method": "nvmf_set_max_subsystems", 00:17:52.647 "params": { 00:17:52.647 "max_subsystems": 1024 00:17:52.647 } 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "method": "nvmf_set_crdt", 00:17:52.647 "params": { 00:17:52.647 "crdt1": 0, 00:17:52.647 "crdt2": 0, 00:17:52.647 "crdt3": 0 00:17:52.647 } 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "method": "nvmf_create_transport", 00:17:52.647 "params": { 00:17:52.647 "trtype": "TCP", 00:17:52.647 "max_queue_depth": 128, 00:17:52.647 "max_io_qpairs_per_ctrlr": 127, 00:17:52.647 "in_capsule_data_size": 4096, 00:17:52.647 "max_io_size": 131072, 00:17:52.647 "io_unit_size": 131072, 00:17:52.647 "max_aq_depth": 128, 00:17:52.647 "num_shared_buffers": 511, 00:17:52.647 "buf_cache_size": 4294967295, 00:17:52.647 "dif_insert_or_strip": false, 00:17:52.647 "zcopy": false, 00:17:52.647 "c2h_success": false, 00:17:52.647 "sock_priority": 0, 00:17:52.647 "abort_timeout_sec": 1, 00:17:52.647 "ack_timeout": 0, 00:17:52.647 "data_wr_pool_size": 0 00:17:52.647 } 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "method": "nvmf_create_subsystem", 00:17:52.647 "params": { 00:17:52.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.647 "allow_any_host": 
false, 00:17:52.647 "serial_number": "SPDK00000000000001", 00:17:52.647 "model_number": "SPDK bdev Controller", 00:17:52.647 "max_namespaces": 10, 00:17:52.647 "min_cntlid": 1, 00:17:52.647 "max_cntlid": 65519, 00:17:52.647 "ana_reporting": false 00:17:52.647 } 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "method": "nvmf_subsystem_add_host", 00:17:52.647 "params": { 00:17:52.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.647 "host": "nqn.2016-06.io.spdk:host1", 00:17:52.647 "psk": "/tmp/tmp.ezoJopq28O" 00:17:52.647 } 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "method": "nvmf_subsystem_add_ns", 00:17:52.647 "params": { 00:17:52.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.647 "namespace": { 00:17:52.647 "nsid": 1, 00:17:52.647 "bdev_name": "malloc0", 00:17:52.647 "nguid": "04019A9240C44F1080BCDAAABFFDFC3E", 00:17:52.647 "uuid": "04019a92-40c4-4f10-80bc-daaabffdfc3e", 00:17:52.647 "no_auto_visible": false 00:17:52.647 } 00:17:52.647 } 00:17:52.647 }, 00:17:52.647 { 00:17:52.647 "method": "nvmf_subsystem_add_listener", 00:17:52.647 "params": { 00:17:52.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.647 "listen_address": { 00:17:52.647 "trtype": "TCP", 00:17:52.647 "adrfam": "IPv4", 00:17:52.647 "traddr": "10.0.0.2", 00:17:52.647 "trsvcid": "4420" 00:17:52.647 }, 00:17:52.647 "secure_channel": true 00:17:52.647 21:17:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.647 } 00:17:52.647 } 00:17:52.647 ] 00:17:52.647 } 00:17:52.647 ] 00:17:52.647 }' 00:17:52.647 21:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:52.647 21:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.647 21:17:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76650 00:17:52.648 21:17:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:52.648 21:17:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76650 00:17:52.648 21:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76650 ']' 00:17:52.648 21:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.648 21:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.648 21:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.648 21:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.648 21:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.648 [2024-07-14 21:17:04.046199] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:52.648 [2024-07-14 21:17:04.046415] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.906 [2024-07-14 21:17:04.215566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.906 [2024-07-14 21:17:04.381678] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:52.906 [2024-07-14 21:17:04.381789] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.906 [2024-07-14 21:17:04.381808] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.906 [2024-07-14 21:17:04.381821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.906 [2024-07-14 21:17:04.381832] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.906 [2024-07-14 21:17:04.381981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.164 [2024-07-14 21:17:04.667647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:53.423 [2024-07-14 21:17:04.819290] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.423 [2024-07-14 21:17:04.835261] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:53.423 [2024-07-14 21:17:04.851227] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:53.423 [2024-07-14 21:17:04.858985] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=76682 00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 76682 /var/tmp/bdevperf.sock 00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76682 ']' 00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
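A note on the configuration plumbing traced above: the target is restarted from the JSON produced by save_config, fed in over /dev/fd/62, and bdevperf is about to receive its own JSON the same way over /dev/fd/63. A minimal sketch of that save/replay pattern, assuming a running target on the default RPC socket; the temporary file names are illustrative, while the flags are the ones visible in the trace:
  # Dump the live configuration (this JSON is what tgtconf/bdevperfconf capture above).
  scripts/rpc.py save_config > /tmp/tgt.json
  # Replay it at startup; a process substitution of the saved JSON is one way to produce the /dev/fd/NN paths seen in the trace.
  build/bin/nvmf_tgt -m 0x2 -c <(cat /tmp/tgt.json)
  # bdevperf takes its config the same way, then idles on its own RPC socket (-z -r) until perform_tests is called.
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(cat /tmp/bdevperf.json)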
00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.423 21:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:17:53.423 "subsystems": [ 00:17:53.423 { 00:17:53.423 "subsystem": "keyring", 00:17:53.423 "config": [] 00:17:53.423 }, 00:17:53.423 { 00:17:53.423 "subsystem": "iobuf", 00:17:53.423 "config": [ 00:17:53.423 { 00:17:53.423 "method": "iobuf_set_options", 00:17:53.423 "params": { 00:17:53.423 "small_pool_count": 8192, 00:17:53.423 "large_pool_count": 1024, 00:17:53.423 "small_bufsize": 8192, 00:17:53.423 "large_bufsize": 135168 00:17:53.423 } 00:17:53.423 } 00:17:53.423 ] 00:17:53.423 }, 00:17:53.423 { 00:17:53.423 "subsystem": "sock", 00:17:53.423 "config": [ 00:17:53.423 { 00:17:53.423 "method": "sock_set_default_impl", 00:17:53.423 "params": { 00:17:53.423 "impl_name": "uring" 00:17:53.423 } 00:17:53.423 }, 00:17:53.423 { 00:17:53.423 "method": "sock_impl_set_options", 00:17:53.423 "params": { 00:17:53.423 "impl_name": "ssl", 00:17:53.423 "recv_buf_size": 4096, 00:17:53.423 "send_buf_size": 4096, 00:17:53.423 "enable_recv_pipe": true, 00:17:53.423 "enable_quickack": false, 00:17:53.423 "enable_placement_id": 0, 00:17:53.423 "enable_zerocopy_send_server": true, 00:17:53.423 "enable_zerocopy_send_client": false, 00:17:53.423 "zerocopy_threshold": 0, 00:17:53.423 "tls_version": 0, 00:17:53.423 "enable_ktls": false 00:17:53.423 } 00:17:53.423 }, 00:17:53.423 { 00:17:53.423 "method": "sock_impl_set_options", 00:17:53.423 "params": { 00:17:53.423 "impl_name": "posix", 00:17:53.423 "recv_buf_size": 2097152, 00:17:53.423 "send_buf_size": 2097152, 00:17:53.423 "enable_recv_pipe": true, 00:17:53.423 "enable_quickack": false, 00:17:53.423 "enable_placement_id": 0, 00:17:53.423 "enable_zerocopy_send_server": true, 00:17:53.423 "enable_zerocopy_send_client": false, 00:17:53.423 "zerocopy_threshold": 0, 00:17:53.423 "tls_version": 0, 00:17:53.423 "enable_ktls": false 00:17:53.423 } 00:17:53.423 }, 00:17:53.423 { 00:17:53.423 "method": "sock_impl_set_options", 00:17:53.423 "params": { 00:17:53.423 "impl_name": "uring", 00:17:53.423 "recv_buf_size": 2097152, 00:17:53.423 "send_buf_size": 2097152, 00:17:53.423 "enable_recv_pipe": true, 00:17:53.424 "enable_quickack": false, 00:17:53.424 "enable_placement_id": 0, 00:17:53.424 "enable_zerocopy_send_server": false, 00:17:53.424 "enable_zerocopy_send_client": false, 00:17:53.424 "zerocopy_threshold": 0, 00:17:53.424 "tls_version": 0, 00:17:53.424 "enable_ktls": false 00:17:53.424 } 00:17:53.424 } 00:17:53.424 ] 00:17:53.424 }, 00:17:53.424 { 00:17:53.424 "subsystem": "vmd", 00:17:53.424 "config": [] 00:17:53.424 }, 00:17:53.424 { 00:17:53.424 "subsystem": "accel", 00:17:53.424 "config": [ 00:17:53.424 { 00:17:53.424 "method": "accel_set_options", 00:17:53.424 "params": { 00:17:53.424 "small_cache_size": 128, 00:17:53.424 "large_cache_size": 16, 00:17:53.424 "task_count": 2048, 00:17:53.424 "sequence_count": 2048, 00:17:53.424 "buf_count": 2048 00:17:53.424 } 00:17:53.424 } 00:17:53.424 ] 00:17:53.424 }, 00:17:53.424 { 00:17:53.424 "subsystem": "bdev", 00:17:53.424 "config": [ 00:17:53.424 { 00:17:53.424 "method": "bdev_set_options", 00:17:53.424 "params": { 00:17:53.424 "bdev_io_pool_size": 65535, 00:17:53.424 
"bdev_io_cache_size": 256, 00:17:53.424 "bdev_auto_examine": true, 00:17:53.424 "iobuf_small_cache_size": 128, 00:17:53.424 "iobuf_large_cache_size": 16 00:17:53.424 } 00:17:53.424 }, 00:17:53.424 { 00:17:53.424 "method": "bdev_raid_set_options", 00:17:53.424 "params": { 00:17:53.424 "process_window_size_kb": 1024 00:17:53.424 } 00:17:53.424 }, 00:17:53.424 { 00:17:53.424 "method": "bdev_iscsi_set_options", 00:17:53.424 "params": { 00:17:53.424 "timeout_sec": 30 00:17:53.424 } 00:17:53.424 }, 00:17:53.424 { 00:17:53.424 "method": "bdev_nvme_set_options", 00:17:53.424 "params": { 00:17:53.424 "action_on_timeout": "none", 00:17:53.424 "timeout_us": 0, 00:17:53.424 "timeout_admin_us": 0, 00:17:53.424 "keep_alive_timeout_ms": 10000, 00:17:53.424 "arbitration_burst": 0, 00:17:53.424 "low_priority_weight": 0, 00:17:53.424 "medium_priority_weight": 0, 00:17:53.424 "high_priority_weight": 0, 00:17:53.424 "nvme_adminq_poll_period_us": 10000, 00:17:53.424 "nvme_ioq_poll_period_us": 0, 00:17:53.424 "io_queue_requests": 512, 00:17:53.424 "delay_cmd_submit": true, 00:17:53.424 "transport_retry_count": 4, 00:17:53.424 "bdev_retry_count": 3, 00:17:53.424 "transport_ack_timeout": 0, 00:17:53.424 "ctrlr_loss_timeout_sec": 0, 00:17:53.424 "reconnect_delay_sec": 0, 00:17:53.424 "fast_io_fail_timeout_sec": 0, 00:17:53.424 "disable_auto_failback": false, 00:17:53.424 "generate_uuids": false, 00:17:53.424 "transport_tos": 0, 00:17:53.424 "nvme_error_stat": false, 00:17:53.424 "rdma_srq_size": 0, 00:17:53.424 "io_path_stat": false, 00:17:53.424 "allow_accel_sequence": false, 00:17:53.424 "rdma_max_cq_size": 0, 00:17:53.424 "rdma_cm_event_timeout_ms": 0, 00:17:53.424 "dhchap_digests": [ 00:17:53.424 "sha256", 00:17:53.424 "sha384", 00:17:53.424 "sha512" 00:17:53.424 ], 00:17:53.424 "dhchap_dhgroups": [ 00:17:53.424 "null", 00:17:53.424 "ffdhe2048", 00:17:53.424 "ffdhe3072", 00:17:53.424 "ffdhe4096", 00:17:53.424 "ffdhe6144", 00:17:53.424 "ffdhe8192" 00:17:53.424 ] 00:17:53.424 } 00:17:53.424 }, 00:17:53.424 { 00:17:53.424 "method": "bdev_nvme_attach_controller", 00:17:53.424 "params": { 00:17:53.424 "name": "TLSTEST", 00:17:53.424 "trtype": "TCP", 00:17:53.424 "adrfam": "IPv4", 00:17:53.424 "traddr": "10.0.0.2", 00:17:53.424 "trsvcid": "4420", 00:17:53.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.424 "prchk_reftag": false, 00:17:53.424 "prchk_guard": false, 00:17:53.424 "ctrlr_loss_timeout_sec": 0, 00:17:53.424 "reconnect_delay_sec": 0, 00:17:53.424 "fast_io_fail_timeout_sec": 0, 00:17:53.424 "psk": "/tmp/tmp.ezoJopq28O", 00:17:53.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.424 "hdgst": false, 00:17:53.424 "ddgst": false 00:17:53.424 } 00:17:53.424 }, 00:17:53.424 { 00:17:53.424 "method": "bdev_nvme_set_hotplug", 00:17:53.424 "params": { 00:17:53.424 "period_us": 100000, 00:17:53.424 "enable": false 00:17:53.424 } 00:17:53.424 }, 00:17:53.424 { 00:17:53.424 "method": "bdev_wait_for_examine" 00:17:53.424 } 00:17:53.424 ] 00:17:53.424 }, 00:17:53.424 { 00:17:53.424 "subsystem": "nbd", 00:17:53.424 "config": [] 00:17:53.424 } 00:17:53.424 ] 00:17:53.424 }' 00:17:53.711 [2024-07-14 21:17:05.046902] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:53.711 [2024-07-14 21:17:05.047092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76682 ] 00:17:53.711 [2024-07-14 21:17:05.221124] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.969 [2024-07-14 21:17:05.397628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.228 [2024-07-14 21:17:05.644356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:54.228 [2024-07-14 21:17:05.734519] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:54.228 [2024-07-14 21:17:05.734699] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:54.487 21:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.487 21:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:54.487 21:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:54.745 Running I/O for 10 seconds... 00:18:04.811 00:18:04.812 Latency(us) 00:18:04.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.812 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:04.812 Verification LBA range: start 0x0 length 0x2000 00:18:04.812 TLSTESTn1 : 10.04 2869.72 11.21 0.00 0.00 44508.69 12511.42 29789.09 00:18:04.812 =================================================================================================================== 00:18:04.812 Total : 2869.72 11.21 0.00 0.00 44508.69 12511.42 29789.09 00:18:04.812 0 00:18:04.812 21:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:04.812 21:17:16 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 76682 00:18:04.812 21:17:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76682 ']' 00:18:04.812 21:17:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76682 00:18:04.812 21:17:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:04.812 21:17:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.812 21:17:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76682 00:18:04.812 21:17:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:04.812 killing process with pid 76682 00:18:04.812 21:17:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:04.812 21:17:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76682' 00:18:04.812 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.812 00:18:04.812 Latency(us) 00:18:04.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.812 =================================================================================================================== 00:18:04.812 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.812 21:17:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76682 00:18:04.812 [2024-07-14 21:17:16.170503] app.c:1023:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:04.812 21:17:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76682 00:18:05.748 21:17:17 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 76650 00:18:05.748 21:17:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76650 ']' 00:18:05.748 21:17:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76650 00:18:05.749 21:17:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:05.749 21:17:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:05.749 21:17:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76650 00:18:05.749 21:17:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:05.749 21:17:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:05.749 killing process with pid 76650 00:18:05.749 21:17:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76650' 00:18:05.749 21:17:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76650 00:18:05.749 [2024-07-14 21:17:17.281328] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:05.749 21:17:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76650 00:18:07.127 21:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:07.127 21:17:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:07.127 21:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:07.127 21:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.127 21:17:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76837 00:18:07.127 21:17:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:07.127 21:17:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76837 00:18:07.127 21:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76837 ']' 00:18:07.127 21:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.127 21:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.127 21:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.127 21:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.128 21:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.128 [2024-07-14 21:17:18.529392] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:07.128 [2024-07-14 21:17:18.529537] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.386 [2024-07-14 21:17:18.696367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.386 [2024-07-14 21:17:18.906863] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:07.386 [2024-07-14 21:17:18.906919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.386 [2024-07-14 21:17:18.906951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.386 [2024-07-14 21:17:18.906964] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.386 [2024-07-14 21:17:18.906975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.386 [2024-07-14 21:17:18.907027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.644 [2024-07-14 21:17:19.081537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:07.902 21:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.902 21:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:07.902 21:17:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:07.902 21:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:07.902 21:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.160 21:17:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.160 21:17:19 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ezoJopq28O 00:18:08.160 21:17:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ezoJopq28O 00:18:08.160 21:17:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:08.417 [2024-07-14 21:17:19.729370] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.418 21:17:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:08.418 21:17:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:08.675 [2024-07-14 21:17:20.197715] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:08.675 [2024-07-14 21:17:20.198030] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.675 21:17:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:08.933 malloc0 00:18:08.933 21:17:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:09.191 21:17:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ezoJopq28O 00:18:09.447 [2024-07-14 21:17:20.878532] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:09.447 21:17:20 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=76886 00:18:09.447 21:17:20 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:09.447 21:17:20 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
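Consolidating the setup_nvmf_tgt trace above into one place, the TLS-enabled target is built with the following RPC sequence (addresses, NQNs and the PSK path are the ones used by this run; note the target logs the --psk file argument as a deprecated feature scheduled for removal in v24.09):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key=/tmp/tmp.ezoJopq28O   # PSK file created earlier in the test
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $key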
00:18:09.447 21:17:20 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 76886 /var/tmp/bdevperf.sock 00:18:09.447 21:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76886 ']' 00:18:09.447 21:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.447 21:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.447 21:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.447 21:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.447 21:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.447 [2024-07-14 21:17:20.984695] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:09.447 [2024-07-14 21:17:20.984910] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76886 ] 00:18:09.705 [2024-07-14 21:17:21.147170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.963 [2024-07-14 21:17:21.361222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.221 [2024-07-14 21:17:21.529204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:10.479 21:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.479 21:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:10.479 21:17:21 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ezoJopq28O 00:18:10.737 21:17:22 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:10.995 [2024-07-14 21:17:22.305201] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:10.995 nvme0n1 00:18:10.995 21:17:22 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:10.995 Running I/O for 1 seconds... 
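The initiator side here switches to the keyring flow: the PSK file is first registered as a named key on bdevperf's RPC socket, and the controller attach on the following trace line refers to that key by name (unlike the earlier attach via spdk_nvme_ctrlr_opts.psk, no PSK deprecation warning is logged for this path). The two calls, as traced:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ezoJopq28O
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1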
00:18:12.367 00:18:12.368 Latency(us) 00:18:12.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.368 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:12.368 Verification LBA range: start 0x0 length 0x2000 00:18:12.368 nvme0n1 : 1.03 2921.75 11.41 0.00 0.00 43035.02 8579.26 26810.18 00:18:12.368 =================================================================================================================== 00:18:12.368 Total : 2921.75 11.41 0.00 0.00 43035.02 8579.26 26810.18 00:18:12.368 0 00:18:12.368 21:17:23 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 76886 00:18:12.368 21:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76886 ']' 00:18:12.368 21:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76886 00:18:12.368 21:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:12.368 21:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.368 21:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76886 00:18:12.368 21:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:12.368 21:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:12.368 killing process with pid 76886 00:18:12.368 21:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76886' 00:18:12.368 Received shutdown signal, test time was about 1.000000 seconds 00:18:12.368 00:18:12.368 Latency(us) 00:18:12.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.368 =================================================================================================================== 00:18:12.368 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.368 21:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76886 00:18:12.368 21:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76886 00:18:13.300 21:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 76837 00:18:13.300 21:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76837 ']' 00:18:13.300 21:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76837 00:18:13.300 21:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:13.300 21:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.300 21:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76837 00:18:13.300 killing process with pid 76837 00:18:13.300 21:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:13.300 21:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:13.300 21:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76837' 00:18:13.300 21:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76837 00:18:13.300 [2024-07-14 21:17:24.593359] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:13.300 21:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76837 00:18:14.233 21:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:14.233 21:17:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:14.233 21:17:25 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:18:14.233 21:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.233 21:17:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76956 00:18:14.233 21:17:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:14.233 21:17:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76956 00:18:14.233 21:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76956 ']' 00:18:14.233 21:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.233 21:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.233 21:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.233 21:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.233 21:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.233 [2024-07-14 21:17:25.745193] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:14.233 [2024-07-14 21:17:25.745359] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.492 [2024-07-14 21:17:25.904944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.751 [2024-07-14 21:17:26.063421] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.751 [2024-07-14 21:17:26.063494] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.751 [2024-07-14 21:17:26.063526] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.751 [2024-07-14 21:17:26.063540] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.751 [2024-07-14 21:17:26.063550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
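The passage above (target/tls.sh steps 225 through 238) exercises NVMe/TCP with a pre-shared TLS key end to end: bdevperf is started in wait-for-RPC mode, the PSK file is registered in its keyring, a TLS-secured controller is attached, and a one-second verify workload is run before both processes are torn down and a fresh nvmf_tgt is brought up for the next case. A minimal sketch of that flow, condensed from the exact commands the trace records; the key path /tmp/tmp.ezoJopq28O, the 10.0.0.2:4420 listener and the NQNs are the values used in this particular run, not defaults:

    # Start bdevperf on its own RPC socket and have it wait for configuration (-z).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 &

    # Register the pre-shared key file under the name "key0".
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.ezoJopq28O

    # Attach a TLS-secured NVMe/TCP controller using that key.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

    # Run the one-second verify workload against the attached namespace.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests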
00:18:14.751 [2024-07-14 21:17:26.063586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.751 [2024-07-14 21:17:26.242376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.318 [2024-07-14 21:17:26.690396] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.318 malloc0 00:18:15.318 [2024-07-14 21:17:26.742280] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:15.318 [2024-07-14 21:17:26.742545] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=76988 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 76988 /var/tmp/bdevperf.sock 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76988 ']' 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.318 21:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.318 [2024-07-14 21:17:26.863369] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:15.318 [2024-07-14 21:17:26.863559] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76988 ] 00:18:15.576 [2024-07-14 21:17:27.027126] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.835 [2024-07-14 21:17:27.245502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.094 [2024-07-14 21:17:27.413031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:16.352 21:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.352 21:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:16.352 21:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ezoJopq28O 00:18:16.611 21:17:28 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:16.869 [2024-07-14 21:17:28.264376] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:16.869 nvme0n1 00:18:16.869 21:17:28 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:17.128 Running I/O for 1 seconds... 00:18:18.065 00:18:18.065 Latency(us) 00:18:18.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.065 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:18.065 Verification LBA range: start 0x0 length 0x2000 00:18:18.065 nvme0n1 : 1.02 3138.44 12.26 0.00 0.00 40239.72 975.59 25856.93 00:18:18.065 =================================================================================================================== 00:18:18.065 Total : 3138.44 12.26 0.00 0.00 40239.72 975.59 25856.93 00:18:18.065 0 00:18:18.065 21:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:18.065 21:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.065 21:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.324 21:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.324 21:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:18.324 "subsystems": [ 00:18:18.324 { 00:18:18.324 "subsystem": "keyring", 00:18:18.324 "config": [ 00:18:18.324 { 00:18:18.324 "method": "keyring_file_add_key", 00:18:18.324 "params": { 00:18:18.324 "name": "key0", 00:18:18.324 "path": "/tmp/tmp.ezoJopq28O" 00:18:18.324 } 00:18:18.324 } 00:18:18.324 ] 00:18:18.324 }, 00:18:18.324 { 00:18:18.324 "subsystem": "iobuf", 00:18:18.324 "config": [ 00:18:18.324 { 00:18:18.324 "method": "iobuf_set_options", 00:18:18.324 "params": { 00:18:18.324 "small_pool_count": 8192, 00:18:18.324 "large_pool_count": 1024, 00:18:18.324 "small_bufsize": 8192, 00:18:18.324 "large_bufsize": 135168 00:18:18.324 } 00:18:18.324 } 00:18:18.324 ] 00:18:18.324 }, 00:18:18.324 { 00:18:18.324 "subsystem": "sock", 00:18:18.324 "config": [ 00:18:18.324 { 00:18:18.324 "method": "sock_set_default_impl", 00:18:18.324 "params": { 00:18:18.324 "impl_name": "uring" 
00:18:18.324 } 00:18:18.324 }, 00:18:18.324 { 00:18:18.324 "method": "sock_impl_set_options", 00:18:18.324 "params": { 00:18:18.324 "impl_name": "ssl", 00:18:18.324 "recv_buf_size": 4096, 00:18:18.324 "send_buf_size": 4096, 00:18:18.324 "enable_recv_pipe": true, 00:18:18.324 "enable_quickack": false, 00:18:18.324 "enable_placement_id": 0, 00:18:18.324 "enable_zerocopy_send_server": true, 00:18:18.324 "enable_zerocopy_send_client": false, 00:18:18.324 "zerocopy_threshold": 0, 00:18:18.324 "tls_version": 0, 00:18:18.324 "enable_ktls": false 00:18:18.324 } 00:18:18.324 }, 00:18:18.324 { 00:18:18.324 "method": "sock_impl_set_options", 00:18:18.324 "params": { 00:18:18.324 "impl_name": "posix", 00:18:18.324 "recv_buf_size": 2097152, 00:18:18.324 "send_buf_size": 2097152, 00:18:18.324 "enable_recv_pipe": true, 00:18:18.324 "enable_quickack": false, 00:18:18.324 "enable_placement_id": 0, 00:18:18.324 "enable_zerocopy_send_server": true, 00:18:18.324 "enable_zerocopy_send_client": false, 00:18:18.324 "zerocopy_threshold": 0, 00:18:18.324 "tls_version": 0, 00:18:18.324 "enable_ktls": false 00:18:18.324 } 00:18:18.324 }, 00:18:18.324 { 00:18:18.324 "method": "sock_impl_set_options", 00:18:18.324 "params": { 00:18:18.324 "impl_name": "uring", 00:18:18.324 "recv_buf_size": 2097152, 00:18:18.324 "send_buf_size": 2097152, 00:18:18.324 "enable_recv_pipe": true, 00:18:18.324 "enable_quickack": false, 00:18:18.324 "enable_placement_id": 0, 00:18:18.324 "enable_zerocopy_send_server": false, 00:18:18.324 "enable_zerocopy_send_client": false, 00:18:18.324 "zerocopy_threshold": 0, 00:18:18.324 "tls_version": 0, 00:18:18.324 "enable_ktls": false 00:18:18.324 } 00:18:18.324 } 00:18:18.324 ] 00:18:18.324 }, 00:18:18.324 { 00:18:18.324 "subsystem": "vmd", 00:18:18.324 "config": [] 00:18:18.324 }, 00:18:18.324 { 00:18:18.324 "subsystem": "accel", 00:18:18.324 "config": [ 00:18:18.324 { 00:18:18.324 "method": "accel_set_options", 00:18:18.324 "params": { 00:18:18.324 "small_cache_size": 128, 00:18:18.324 "large_cache_size": 16, 00:18:18.324 "task_count": 2048, 00:18:18.324 "sequence_count": 2048, 00:18:18.324 "buf_count": 2048 00:18:18.324 } 00:18:18.324 } 00:18:18.324 ] 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "subsystem": "bdev", 00:18:18.325 "config": [ 00:18:18.325 { 00:18:18.325 "method": "bdev_set_options", 00:18:18.325 "params": { 00:18:18.325 "bdev_io_pool_size": 65535, 00:18:18.325 "bdev_io_cache_size": 256, 00:18:18.325 "bdev_auto_examine": true, 00:18:18.325 "iobuf_small_cache_size": 128, 00:18:18.325 "iobuf_large_cache_size": 16 00:18:18.325 } 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "method": "bdev_raid_set_options", 00:18:18.325 "params": { 00:18:18.325 "process_window_size_kb": 1024 00:18:18.325 } 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "method": "bdev_iscsi_set_options", 00:18:18.325 "params": { 00:18:18.325 "timeout_sec": 30 00:18:18.325 } 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "method": "bdev_nvme_set_options", 00:18:18.325 "params": { 00:18:18.325 "action_on_timeout": "none", 00:18:18.325 "timeout_us": 0, 00:18:18.325 "timeout_admin_us": 0, 00:18:18.325 "keep_alive_timeout_ms": 10000, 00:18:18.325 "arbitration_burst": 0, 00:18:18.325 "low_priority_weight": 0, 00:18:18.325 "medium_priority_weight": 0, 00:18:18.325 "high_priority_weight": 0, 00:18:18.325 "nvme_adminq_poll_period_us": 10000, 00:18:18.325 "nvme_ioq_poll_period_us": 0, 00:18:18.325 "io_queue_requests": 0, 00:18:18.325 "delay_cmd_submit": true, 00:18:18.325 "transport_retry_count": 4, 00:18:18.325 "bdev_retry_count": 3, 
00:18:18.325 "transport_ack_timeout": 0, 00:18:18.325 "ctrlr_loss_timeout_sec": 0, 00:18:18.325 "reconnect_delay_sec": 0, 00:18:18.325 "fast_io_fail_timeout_sec": 0, 00:18:18.325 "disable_auto_failback": false, 00:18:18.325 "generate_uuids": false, 00:18:18.325 "transport_tos": 0, 00:18:18.325 "nvme_error_stat": false, 00:18:18.325 "rdma_srq_size": 0, 00:18:18.325 "io_path_stat": false, 00:18:18.325 "allow_accel_sequence": false, 00:18:18.325 "rdma_max_cq_size": 0, 00:18:18.325 "rdma_cm_event_timeout_ms": 0, 00:18:18.325 "dhchap_digests": [ 00:18:18.325 "sha256", 00:18:18.325 "sha384", 00:18:18.325 "sha512" 00:18:18.325 ], 00:18:18.325 "dhchap_dhgroups": [ 00:18:18.325 "null", 00:18:18.325 "ffdhe2048", 00:18:18.325 "ffdhe3072", 00:18:18.325 "ffdhe4096", 00:18:18.325 "ffdhe6144", 00:18:18.325 "ffdhe8192" 00:18:18.325 ] 00:18:18.325 } 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "method": "bdev_nvme_set_hotplug", 00:18:18.325 "params": { 00:18:18.325 "period_us": 100000, 00:18:18.325 "enable": false 00:18:18.325 } 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "method": "bdev_malloc_create", 00:18:18.325 "params": { 00:18:18.325 "name": "malloc0", 00:18:18.325 "num_blocks": 8192, 00:18:18.325 "block_size": 4096, 00:18:18.325 "physical_block_size": 4096, 00:18:18.325 "uuid": "3cf9d091-337c-4a3a-8ca6-234239188de5", 00:18:18.325 "optimal_io_boundary": 0 00:18:18.325 } 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "method": "bdev_wait_for_examine" 00:18:18.325 } 00:18:18.325 ] 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "subsystem": "nbd", 00:18:18.325 "config": [] 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "subsystem": "scheduler", 00:18:18.325 "config": [ 00:18:18.325 { 00:18:18.325 "method": "framework_set_scheduler", 00:18:18.325 "params": { 00:18:18.325 "name": "static" 00:18:18.325 } 00:18:18.325 } 00:18:18.325 ] 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "subsystem": "nvmf", 00:18:18.325 "config": [ 00:18:18.325 { 00:18:18.325 "method": "nvmf_set_config", 00:18:18.325 "params": { 00:18:18.325 "discovery_filter": "match_any", 00:18:18.325 "admin_cmd_passthru": { 00:18:18.325 "identify_ctrlr": false 00:18:18.325 } 00:18:18.325 } 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "method": "nvmf_set_max_subsystems", 00:18:18.325 "params": { 00:18:18.325 "max_subsystems": 1024 00:18:18.325 } 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "method": "nvmf_set_crdt", 00:18:18.325 "params": { 00:18:18.325 "crdt1": 0, 00:18:18.325 "crdt2": 0, 00:18:18.325 "crdt3": 0 00:18:18.325 } 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "method": "nvmf_create_transport", 00:18:18.325 "params": { 00:18:18.325 "trtype": "TCP", 00:18:18.325 "max_queue_depth": 128, 00:18:18.325 "max_io_qpairs_per_ctrlr": 127, 00:18:18.325 "in_capsule_data_size": 4096, 00:18:18.325 "max_io_size": 131072, 00:18:18.325 "io_unit_size": 131072, 00:18:18.325 "max_aq_depth": 128, 00:18:18.325 "num_shared_buffers": 511, 00:18:18.325 "buf_cache_size": 4294967295, 00:18:18.325 "dif_insert_or_strip": false, 00:18:18.325 "zcopy": false, 00:18:18.325 "c2h_success": false, 00:18:18.325 "sock_priority": 0, 00:18:18.325 "abort_timeout_sec": 1, 00:18:18.325 "ack_timeout": 0, 00:18:18.325 "data_wr_pool_size": 0 00:18:18.325 } 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "method": "nvmf_create_subsystem", 00:18:18.325 "params": { 00:18:18.325 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.325 "allow_any_host": false, 00:18:18.325 "serial_number": "00000000000000000000", 00:18:18.325 "model_number": "SPDK bdev Controller", 00:18:18.325 "max_namespaces": 32, 
00:18:18.325 "min_cntlid": 1, 00:18:18.325 "max_cntlid": 65519, 00:18:18.325 "ana_reporting": false 00:18:18.325 } 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "method": "nvmf_subsystem_add_host", 00:18:18.325 "params": { 00:18:18.325 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.325 "host": "nqn.2016-06.io.spdk:host1", 00:18:18.325 "psk": "key0" 00:18:18.325 } 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "method": "nvmf_subsystem_add_ns", 00:18:18.325 "params": { 00:18:18.325 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.325 "namespace": { 00:18:18.325 "nsid": 1, 00:18:18.325 "bdev_name": "malloc0", 00:18:18.325 "nguid": "3CF9D091337C4A3A8CA6234239188DE5", 00:18:18.325 "uuid": "3cf9d091-337c-4a3a-8ca6-234239188de5", 00:18:18.325 "no_auto_visible": false 00:18:18.325 } 00:18:18.325 } 00:18:18.325 }, 00:18:18.325 { 00:18:18.325 "method": "nvmf_subsystem_add_listener", 00:18:18.325 "params": { 00:18:18.325 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.325 "listen_address": { 00:18:18.325 "trtype": "TCP", 00:18:18.325 "adrfam": "IPv4", 00:18:18.325 "traddr": "10.0.0.2", 00:18:18.325 "trsvcid": "4420" 00:18:18.325 }, 00:18:18.325 "secure_channel": true 00:18:18.325 } 00:18:18.325 } 00:18:18.325 ] 00:18:18.325 } 00:18:18.325 ] 00:18:18.325 }' 00:18:18.325 21:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:18.584 21:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:18.584 "subsystems": [ 00:18:18.584 { 00:18:18.584 "subsystem": "keyring", 00:18:18.584 "config": [ 00:18:18.584 { 00:18:18.584 "method": "keyring_file_add_key", 00:18:18.584 "params": { 00:18:18.584 "name": "key0", 00:18:18.584 "path": "/tmp/tmp.ezoJopq28O" 00:18:18.584 } 00:18:18.584 } 00:18:18.584 ] 00:18:18.584 }, 00:18:18.584 { 00:18:18.584 "subsystem": "iobuf", 00:18:18.584 "config": [ 00:18:18.584 { 00:18:18.584 "method": "iobuf_set_options", 00:18:18.584 "params": { 00:18:18.584 "small_pool_count": 8192, 00:18:18.584 "large_pool_count": 1024, 00:18:18.584 "small_bufsize": 8192, 00:18:18.584 "large_bufsize": 135168 00:18:18.584 } 00:18:18.584 } 00:18:18.584 ] 00:18:18.584 }, 00:18:18.584 { 00:18:18.584 "subsystem": "sock", 00:18:18.584 "config": [ 00:18:18.584 { 00:18:18.584 "method": "sock_set_default_impl", 00:18:18.584 "params": { 00:18:18.584 "impl_name": "uring" 00:18:18.584 } 00:18:18.584 }, 00:18:18.584 { 00:18:18.584 "method": "sock_impl_set_options", 00:18:18.584 "params": { 00:18:18.584 "impl_name": "ssl", 00:18:18.584 "recv_buf_size": 4096, 00:18:18.584 "send_buf_size": 4096, 00:18:18.584 "enable_recv_pipe": true, 00:18:18.584 "enable_quickack": false, 00:18:18.584 "enable_placement_id": 0, 00:18:18.584 "enable_zerocopy_send_server": true, 00:18:18.584 "enable_zerocopy_send_client": false, 00:18:18.584 "zerocopy_threshold": 0, 00:18:18.584 "tls_version": 0, 00:18:18.584 "enable_ktls": false 00:18:18.584 } 00:18:18.584 }, 00:18:18.584 { 00:18:18.584 "method": "sock_impl_set_options", 00:18:18.584 "params": { 00:18:18.584 "impl_name": "posix", 00:18:18.584 "recv_buf_size": 2097152, 00:18:18.584 "send_buf_size": 2097152, 00:18:18.584 "enable_recv_pipe": true, 00:18:18.585 "enable_quickack": false, 00:18:18.585 "enable_placement_id": 0, 00:18:18.585 "enable_zerocopy_send_server": true, 00:18:18.585 "enable_zerocopy_send_client": false, 00:18:18.585 "zerocopy_threshold": 0, 00:18:18.585 "tls_version": 0, 00:18:18.585 "enable_ktls": false 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": 
"sock_impl_set_options", 00:18:18.585 "params": { 00:18:18.585 "impl_name": "uring", 00:18:18.585 "recv_buf_size": 2097152, 00:18:18.585 "send_buf_size": 2097152, 00:18:18.585 "enable_recv_pipe": true, 00:18:18.585 "enable_quickack": false, 00:18:18.585 "enable_placement_id": 0, 00:18:18.585 "enable_zerocopy_send_server": false, 00:18:18.585 "enable_zerocopy_send_client": false, 00:18:18.585 "zerocopy_threshold": 0, 00:18:18.585 "tls_version": 0, 00:18:18.585 "enable_ktls": false 00:18:18.585 } 00:18:18.585 } 00:18:18.585 ] 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "subsystem": "vmd", 00:18:18.585 "config": [] 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "subsystem": "accel", 00:18:18.585 "config": [ 00:18:18.585 { 00:18:18.585 "method": "accel_set_options", 00:18:18.585 "params": { 00:18:18.585 "small_cache_size": 128, 00:18:18.585 "large_cache_size": 16, 00:18:18.585 "task_count": 2048, 00:18:18.585 "sequence_count": 2048, 00:18:18.585 "buf_count": 2048 00:18:18.585 } 00:18:18.585 } 00:18:18.585 ] 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "subsystem": "bdev", 00:18:18.585 "config": [ 00:18:18.585 { 00:18:18.585 "method": "bdev_set_options", 00:18:18.585 "params": { 00:18:18.585 "bdev_io_pool_size": 65535, 00:18:18.585 "bdev_io_cache_size": 256, 00:18:18.585 "bdev_auto_examine": true, 00:18:18.585 "iobuf_small_cache_size": 128, 00:18:18.585 "iobuf_large_cache_size": 16 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": "bdev_raid_set_options", 00:18:18.585 "params": { 00:18:18.585 "process_window_size_kb": 1024 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": "bdev_iscsi_set_options", 00:18:18.585 "params": { 00:18:18.585 "timeout_sec": 30 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": "bdev_nvme_set_options", 00:18:18.585 "params": { 00:18:18.585 "action_on_timeout": "none", 00:18:18.585 "timeout_us": 0, 00:18:18.585 "timeout_admin_us": 0, 00:18:18.585 "keep_alive_timeout_ms": 10000, 00:18:18.585 "arbitration_burst": 0, 00:18:18.585 "low_priority_weight": 0, 00:18:18.585 "medium_priority_weight": 0, 00:18:18.585 "high_priority_weight": 0, 00:18:18.585 "nvme_adminq_poll_period_us": 10000, 00:18:18.585 "nvme_ioq_poll_period_us": 0, 00:18:18.585 "io_queue_requests": 512, 00:18:18.585 "delay_cmd_submit": true, 00:18:18.585 "transport_retry_count": 4, 00:18:18.585 "bdev_retry_count": 3, 00:18:18.585 "transport_ack_timeout": 0, 00:18:18.585 "ctrlr_loss_timeout_sec": 0, 00:18:18.585 "reconnect_delay_sec": 0, 00:18:18.585 "fast_io_fail_timeout_sec": 0, 00:18:18.585 "disable_auto_failback": false, 00:18:18.585 "generate_uuids": false, 00:18:18.585 "transport_tos": 0, 00:18:18.585 "nvme_error_stat": false, 00:18:18.585 "rdma_srq_size": 0, 00:18:18.585 "io_path_stat": false, 00:18:18.585 "allow_accel_sequence": false, 00:18:18.585 "rdma_max_cq_size": 0, 00:18:18.585 "rdma_cm_event_timeout_ms": 0, 00:18:18.585 "dhchap_digests": [ 00:18:18.585 "sha256", 00:18:18.585 "sha384", 00:18:18.585 "sha512" 00:18:18.585 ], 00:18:18.585 "dhchap_dhgroups": [ 00:18:18.585 "null", 00:18:18.585 "ffdhe2048", 00:18:18.585 "ffdhe3072", 00:18:18.585 "ffdhe4096", 00:18:18.585 "ffdhe6144", 00:18:18.585 "ffdhe8192" 00:18:18.585 ] 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": "bdev_nvme_attach_controller", 00:18:18.585 "params": { 00:18:18.585 "name": "nvme0", 00:18:18.585 "trtype": "TCP", 00:18:18.585 "adrfam": "IPv4", 00:18:18.585 "traddr": "10.0.0.2", 00:18:18.585 "trsvcid": "4420", 00:18:18.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:18:18.585 "prchk_reftag": false, 00:18:18.585 "prchk_guard": false, 00:18:18.585 "ctrlr_loss_timeout_sec": 0, 00:18:18.585 "reconnect_delay_sec": 0, 00:18:18.585 "fast_io_fail_timeout_sec": 0, 00:18:18.585 "psk": "key0", 00:18:18.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.585 "hdgst": false, 00:18:18.585 "ddgst": false 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": "bdev_nvme_set_hotplug", 00:18:18.585 "params": { 00:18:18.585 "period_us": 100000, 00:18:18.585 "enable": false 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": "bdev_enable_histogram", 00:18:18.585 "params": { 00:18:18.585 "name": "nvme0n1", 00:18:18.585 "enable": true 00:18:18.585 } 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "method": "bdev_wait_for_examine" 00:18:18.585 } 00:18:18.585 ] 00:18:18.585 }, 00:18:18.585 { 00:18:18.585 "subsystem": "nbd", 00:18:18.585 "config": [] 00:18:18.585 } 00:18:18.585 ] 00:18:18.585 }' 00:18:18.585 21:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 76988 00:18:18.585 21:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76988 ']' 00:18:18.585 21:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76988 00:18:18.585 21:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:18.585 21:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:18.585 21:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76988 00:18:18.585 21:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:18.585 killing process with pid 76988 00:18:18.585 21:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:18.585 21:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76988' 00:18:18.585 Received shutdown signal, test time was about 1.000000 seconds 00:18:18.585 00:18:18.585 Latency(us) 00:18:18.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.585 =================================================================================================================== 00:18:18.585 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:18.585 21:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76988 00:18:18.585 21:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76988 00:18:19.520 21:17:30 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 76956 00:18:19.520 21:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76956 ']' 00:18:19.520 21:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76956 00:18:19.520 21:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:19.520 21:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:19.520 21:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76956 00:18:19.520 21:17:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:19.520 killing process with pid 76956 00:18:19.520 21:17:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:19.520 21:17:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76956' 00:18:19.520 21:17:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76956 00:18:19.520 21:17:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76956 
00:18:20.897 21:17:32 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:20.897 21:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:20.897 21:17:32 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:20.897 "subsystems": [ 00:18:20.897 { 00:18:20.897 "subsystem": "keyring", 00:18:20.897 "config": [ 00:18:20.897 { 00:18:20.897 "method": "keyring_file_add_key", 00:18:20.897 "params": { 00:18:20.897 "name": "key0", 00:18:20.897 "path": "/tmp/tmp.ezoJopq28O" 00:18:20.897 } 00:18:20.897 } 00:18:20.897 ] 00:18:20.897 }, 00:18:20.897 { 00:18:20.897 "subsystem": "iobuf", 00:18:20.897 "config": [ 00:18:20.897 { 00:18:20.897 "method": "iobuf_set_options", 00:18:20.897 "params": { 00:18:20.897 "small_pool_count": 8192, 00:18:20.897 "large_pool_count": 1024, 00:18:20.897 "small_bufsize": 8192, 00:18:20.897 "large_bufsize": 135168 00:18:20.897 } 00:18:20.897 } 00:18:20.897 ] 00:18:20.897 }, 00:18:20.897 { 00:18:20.897 "subsystem": "sock", 00:18:20.897 "config": [ 00:18:20.897 { 00:18:20.897 "method": "sock_set_default_impl", 00:18:20.897 "params": { 00:18:20.897 "impl_name": "uring" 00:18:20.897 } 00:18:20.897 }, 00:18:20.897 { 00:18:20.897 "method": "sock_impl_set_options", 00:18:20.897 "params": { 00:18:20.897 "impl_name": "ssl", 00:18:20.897 "recv_buf_size": 4096, 00:18:20.897 "send_buf_size": 4096, 00:18:20.897 "enable_recv_pipe": true, 00:18:20.897 "enable_quickack": false, 00:18:20.897 "enable_placement_id": 0, 00:18:20.897 "enable_zerocopy_send_server": true, 00:18:20.897 "enable_zerocopy_send_client": false, 00:18:20.897 "zerocopy_threshold": 0, 00:18:20.897 "tls_version": 0, 00:18:20.897 "enable_ktls": false 00:18:20.897 } 00:18:20.897 }, 00:18:20.897 { 00:18:20.897 "method": "sock_impl_set_options", 00:18:20.897 "params": { 00:18:20.897 "impl_name": "posix", 00:18:20.897 "recv_buf_size": 2097152, 00:18:20.897 "send_buf_size": 2097152, 00:18:20.897 "enable_recv_pipe": true, 00:18:20.897 "enable_quickack": false, 00:18:20.897 "enable_placement_id": 0, 00:18:20.897 "enable_zerocopy_send_server": true, 00:18:20.897 "enable_zerocopy_send_client": false, 00:18:20.897 "zerocopy_threshold": 0, 00:18:20.897 "tls_version": 0, 00:18:20.897 "enable_ktls": false 00:18:20.897 } 00:18:20.897 }, 00:18:20.897 { 00:18:20.897 "method": "sock_impl_set_options", 00:18:20.897 "params": { 00:18:20.897 "impl_name": "uring", 00:18:20.897 "recv_buf_size": 2097152, 00:18:20.897 "send_buf_size": 2097152, 00:18:20.897 "enable_recv_pipe": true, 00:18:20.897 "enable_quickack": false, 00:18:20.897 "enable_placement_id": 0, 00:18:20.897 "enable_zerocopy_send_server": false, 00:18:20.897 "enable_zerocopy_send_client": false, 00:18:20.897 "zerocopy_threshold": 0, 00:18:20.897 "tls_version": 0, 00:18:20.897 "enable_ktls": false 00:18:20.897 } 00:18:20.897 } 00:18:20.897 ] 00:18:20.897 }, 00:18:20.897 { 00:18:20.897 "subsystem": "vmd", 00:18:20.897 "config": [] 00:18:20.897 }, 00:18:20.897 { 00:18:20.897 "subsystem": "accel", 00:18:20.897 "config": [ 00:18:20.897 { 00:18:20.897 "method": "accel_set_options", 00:18:20.897 "params": { 00:18:20.897 "small_cache_size": 128, 00:18:20.897 "large_cache_size": 16, 00:18:20.897 "task_count": 2048, 00:18:20.897 "sequence_count": 2048, 00:18:20.897 "buf_count": 2048 00:18:20.897 } 00:18:20.897 } 00:18:20.897 ] 00:18:20.897 }, 00:18:20.897 { 00:18:20.897 "subsystem": "bdev", 00:18:20.897 "config": [ 00:18:20.897 { 00:18:20.897 "method": "bdev_set_options", 00:18:20.897 "params": { 00:18:20.897 "bdev_io_pool_size": 65535, 
00:18:20.897 "bdev_io_cache_size": 256, 00:18:20.897 "bdev_auto_examine": true, 00:18:20.897 "iobuf_small_cache_size": 128, 00:18:20.897 "iobuf_large_cache_size": 16 00:18:20.897 } 00:18:20.897 }, 00:18:20.897 { 00:18:20.897 "method": "bdev_raid_set_options", 00:18:20.897 "params": { 00:18:20.897 "process_window_size_kb": 1024 00:18:20.897 } 00:18:20.897 }, 00:18:20.897 { 00:18:20.897 "method": "bdev_iscsi_set_options", 00:18:20.897 "params": { 00:18:20.897 "timeout_sec": 30 00:18:20.897 } 00:18:20.897 }, 00:18:20.897 { 00:18:20.897 "method": "bdev_nvme_set_options", 00:18:20.897 "params": { 00:18:20.897 "action_on_timeout": "none", 00:18:20.897 "timeout_us": 0, 00:18:20.897 "timeout_admin_us": 0, 00:18:20.897 "keep_alive_timeout_ms": 10000, 00:18:20.897 "arbitration_burst": 0, 00:18:20.897 "low_priority_weight": 0, 00:18:20.897 "medium_priority_weight": 0, 00:18:20.897 "high_priority_weight": 0, 00:18:20.897 "nvme_adminq_poll_period_us": 10000, 00:18:20.897 "nvme_ioq_poll_period_us": 0, 00:18:20.897 "io_queue_requests": 0, 00:18:20.897 "delay_cmd_submit": true, 00:18:20.897 "transport_retry_count": 4, 00:18:20.897 "bdev_retry_count": 3, 00:18:20.897 "transport_ack_timeout": 0, 00:18:20.897 "ctrlr_loss_timeout_sec": 0, 00:18:20.897 "reconnect_delay_sec": 0, 00:18:20.897 "fast_io_fail_timeout_sec": 0, 00:18:20.897 "disable_auto_failback": false, 00:18:20.897 "generate_uuids": false, 00:18:20.897 "transport_tos": 0, 00:18:20.897 "nvme_error_stat": false, 00:18:20.897 "rdma_srq_size": 0, 00:18:20.897 "io_path_stat": false, 00:18:20.897 "allow_accel_sequence": false, 00:18:20.897 "rdma_max_cq_size": 0, 00:18:20.897 "rdma_cm_event_timeout_ms": 0, 00:18:20.897 "dhchap_digests": [ 00:18:20.897 "sha256", 00:18:20.897 "sha384", 00:18:20.897 "sha512" 00:18:20.897 ], 00:18:20.897 "dhchap_dhgroups": [ 00:18:20.897 "null", 00:18:20.897 "ffdhe2048", 00:18:20.897 "ffdhe3072", 00:18:20.898 "ffdhe4096", 00:18:20.898 "ffdhe6144", 00:18:20.898 "ffdhe8192" 00:18:20.898 ] 00:18:20.898 } 00:18:20.898 }, 00:18:20.898 { 00:18:20.898 "method": "bdev_nvme_set_hotplug", 00:18:20.898 "params": { 00:18:20.898 "period_us": 100000, 00:18:20.898 "enable": false 00:18:20.898 } 00:18:20.898 }, 00:18:20.898 { 00:18:20.898 "method": "bdev_malloc_create", 00:18:20.898 "params": { 00:18:20.898 "name": "malloc0", 00:18:20.898 "num_blocks": 8192, 00:18:20.898 "block_size": 4096, 00:18:20.898 "physical_block_size": 4096, 00:18:20.898 "uuid": "3cf9d091-337c-4a3a-8ca6-234239188de5", 00:18:20.898 "optimal_io_boundary": 0 00:18:20.898 } 00:18:20.898 }, 00:18:20.898 { 00:18:20.898 "method": "bdev_wait_for_examine" 00:18:20.898 } 00:18:20.898 ] 00:18:20.898 }, 00:18:20.898 { 00:18:20.898 "subsystem": "nbd", 00:18:20.898 "config": [] 00:18:20.898 }, 00:18:20.898 { 00:18:20.898 "subsystem": "scheduler", 00:18:20.898 "config": [ 00:18:20.898 { 00:18:20.898 "method": "framework_set_scheduler", 00:18:20.898 "params": { 00:18:20.898 "name": "static" 00:18:20.898 } 00:18:20.898 } 00:18:20.898 ] 00:18:20.898 }, 00:18:20.898 { 00:18:20.898 "subsystem": "nvmf", 00:18:20.898 "config": [ 00:18:20.898 { 00:18:20.898 "method": "nvmf_set_config", 00:18:20.898 "params": { 00:18:20.898 "discovery_filter": "match_any", 00:18:20.898 "admin_cmd_passthru": { 00:18:20.898 "identify_ctrlr": false 00:18:20.898 } 00:18:20.898 } 00:18:20.898 }, 00:18:20.898 { 00:18:20.898 "method": "nvmf_set_max_subsystems", 00:18:20.898 "params": { 00:18:20.898 "max_subsystems": 1024 00:18:20.898 } 00:18:20.898 }, 00:18:20.898 { 00:18:20.898 "method": "nvmf_set_crdt", 
00:18:20.898 "params": { 00:18:20.898 "crdt1": 0, 00:18:20.898 "crdt2": 0, 00:18:20.898 "crdt3": 0 00:18:20.898 } 00:18:20.898 }, 00:18:20.898 { 00:18:20.898 "method": "nvmf_create_transport", 00:18:20.898 "params": { 00:18:20.898 "trtype": "TCP", 00:18:20.898 "max_queue_depth": 128, 00:18:20.898 "max_io_qpairs_per_ctrlr": 127, 00:18:20.898 "in_capsule_data_size": 4096, 00:18:20.898 "max_io_size": 131072, 00:18:20.898 "io_unit_size": 131072, 00:18:20.898 "max_aq_depth": 128, 00:18:20.898 "num_shared_buffers": 511, 00:18:20.898 "buf_cache_size": 4294967295, 00:18:20.898 "dif_insert_or_strip": false, 00:18:20.898 "zcopy": false, 00:18:20.898 "c2h_success": false, 00:18:20.898 "sock_priority": 0, 00:18:20.898 "abort_timeout_sec": 1, 00:18:20.898 "ack_timeout": 0, 00:18:20.898 "data_wr_pool_size": 0 00:18:20.898 } 00:18:20.898 }, 00:18:20.898 { 00:18:20.898 "method": "nvmf_create_subsystem", 00:18:20.898 "params": { 00:18:20.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.898 "allow_any_host": false, 00:18:20.898 "serial_number": "00000000000000000000", 00:18:20.898 "model_number": "SPDK bdev Controller", 00:18:20.898 "max_namespaces": 32, 00:18:20.898 "min_cntlid": 1, 00:18:20.898 "max_cntlid": 65519, 00:18:20.898 "ana_reporting": false 00:18:20.898 } 00:18:20.898 }, 00:18:20.898 { 00:18:20.898 "method": "nvmf_subsystem_add_host", 00:18:20.898 "params": { 00:18:20.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.898 "host": "nqn.2016-06.io.spdk:host1", 00:18:20.898 "psk": "key0" 00:18:20.898 } 00:18:20.898 }, 00:18:20.898 { 00:18:20.898 "method": "nvmf_subsystem_add_ns", 00:18:20.898 "params": { 00:18:20.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.898 "namespace": { 00:18:20.898 "nsid": 1, 00:18:20.898 "bdev_name": "malloc0", 00:18:20.898 "nguid": "3CF9D091337C4A3A8CA6234239188DE5", 00:18:20.898 "uuid": "3cf9d091-337c-4a3a-8ca6-234239188de5", 00:18:20.898 "no_auto_visible": false 00:18:20.898 } 00:18:20.898 } 00:18:20.898 }, 00:18:20.898 { 00:18:20.898 "method": "nvmf_subsystem_add_listener", 00:18:20.898 "params": { 00:18:20.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.898 "listen_address": { 00:18:20.898 "trtype": "TCP", 00:18:20.898 "adrfam": "IPv4", 00:18:20.898 "traddr": "10.0.0.2", 00:18:20.898 "trsvcid": "4420" 00:18:20.898 }, 00:18:20.898 "secure_channel": true 00:18:20.898 } 00:18:20.898 } 00:18:20.898 ] 00:18:20.898 } 00:18:20.898 ] 00:18:20.898 }' 00:18:20.898 21:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:20.898 21:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.898 21:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=77067 00:18:20.898 21:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:20.898 21:17:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 77067 00:18:20.898 21:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77067 ']' 00:18:20.898 21:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.898 21:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.898 21:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:20.898 21:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.898 21:17:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.898 [2024-07-14 21:17:32.187749] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:20.898 [2024-07-14 21:17:32.187996] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.898 [2024-07-14 21:17:32.352627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.157 [2024-07-14 21:17:32.524515] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.157 [2024-07-14 21:17:32.524638] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.157 [2024-07-14 21:17:32.524671] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.157 [2024-07-14 21:17:32.524685] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.157 [2024-07-14 21:17:32.524695] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.157 [2024-07-14 21:17:32.524834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.415 [2024-07-14 21:17:32.799902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:21.415 [2024-07-14 21:17:32.944471] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.673 [2024-07-14 21:17:32.976413] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:21.673 [2024-07-14 21:17:32.983939] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.673 21:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.673 21:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:21.673 21:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:21.673 21:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:21.673 21:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.673 21:17:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.673 21:17:33 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=77095 00:18:21.673 21:17:33 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 77095 /var/tmp/bdevperf.sock 00:18:21.673 21:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77095 ']' 00:18:21.673 21:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:21.673 21:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:21.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:21.673 21:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:21.673 21:17:33 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:21.673 21:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:21.673 21:17:33 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:21.673 "subsystems": [ 00:18:21.673 { 00:18:21.673 "subsystem": "keyring", 00:18:21.673 "config": [ 00:18:21.673 { 00:18:21.673 "method": "keyring_file_add_key", 00:18:21.673 "params": { 00:18:21.673 "name": "key0", 00:18:21.673 "path": "/tmp/tmp.ezoJopq28O" 00:18:21.673 } 00:18:21.674 } 00:18:21.674 ] 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "subsystem": "iobuf", 00:18:21.674 "config": [ 00:18:21.674 { 00:18:21.674 "method": "iobuf_set_options", 00:18:21.674 "params": { 00:18:21.674 "small_pool_count": 8192, 00:18:21.674 "large_pool_count": 1024, 00:18:21.674 "small_bufsize": 8192, 00:18:21.674 "large_bufsize": 135168 00:18:21.674 } 00:18:21.674 } 00:18:21.674 ] 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "subsystem": "sock", 00:18:21.674 "config": [ 00:18:21.674 { 00:18:21.674 "method": "sock_set_default_impl", 00:18:21.674 "params": { 00:18:21.674 "impl_name": "uring" 00:18:21.674 } 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "method": "sock_impl_set_options", 00:18:21.674 "params": { 00:18:21.674 "impl_name": "ssl", 00:18:21.674 "recv_buf_size": 4096, 00:18:21.674 "send_buf_size": 4096, 00:18:21.674 "enable_recv_pipe": true, 00:18:21.674 "enable_quickack": false, 00:18:21.674 "enable_placement_id": 0, 00:18:21.674 "enable_zerocopy_send_server": true, 00:18:21.674 "enable_zerocopy_send_client": false, 00:18:21.674 "zerocopy_threshold": 0, 00:18:21.674 "tls_version": 0, 00:18:21.674 "enable_ktls": false 00:18:21.674 } 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "method": "sock_impl_set_options", 00:18:21.674 "params": { 00:18:21.674 "impl_name": "posix", 00:18:21.674 "recv_buf_size": 2097152, 00:18:21.674 "send_buf_size": 2097152, 00:18:21.674 "enable_recv_pipe": true, 00:18:21.674 "enable_quickack": false, 00:18:21.674 "enable_placement_id": 0, 00:18:21.674 "enable_zerocopy_send_server": true, 00:18:21.674 "enable_zerocopy_send_client": false, 00:18:21.674 "zerocopy_threshold": 0, 00:18:21.674 "tls_version": 0, 00:18:21.674 "enable_ktls": false 00:18:21.674 } 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "method": "sock_impl_set_options", 00:18:21.674 "params": { 00:18:21.674 "impl_name": "uring", 00:18:21.674 "recv_buf_size": 2097152, 00:18:21.674 "send_buf_size": 2097152, 00:18:21.674 "enable_recv_pipe": true, 00:18:21.674 "enable_quickack": false, 00:18:21.674 "enable_placement_id": 0, 00:18:21.674 "enable_zerocopy_send_server": false, 00:18:21.674 "enable_zerocopy_send_client": false, 00:18:21.674 "zerocopy_threshold": 0, 00:18:21.674 "tls_version": 0, 00:18:21.674 "enable_ktls": false 00:18:21.674 } 00:18:21.674 } 00:18:21.674 ] 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "subsystem": "vmd", 00:18:21.674 "config": [] 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "subsystem": "accel", 00:18:21.674 "config": [ 00:18:21.674 { 00:18:21.674 "method": "accel_set_options", 00:18:21.674 "params": { 00:18:21.674 "small_cache_size": 128, 00:18:21.674 "large_cache_size": 16, 00:18:21.674 "task_count": 2048, 00:18:21.674 "sequence_count": 2048, 00:18:21.674 "buf_count": 2048 00:18:21.674 } 00:18:21.674 } 00:18:21.674 ] 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "subsystem": "bdev", 00:18:21.674 "config": [ 00:18:21.674 { 
00:18:21.674 "method": "bdev_set_options", 00:18:21.674 "params": { 00:18:21.674 "bdev_io_pool_size": 65535, 00:18:21.674 "bdev_io_cache_size": 256, 00:18:21.674 "bdev_auto_examine": true, 00:18:21.674 "iobuf_small_cache_size": 128, 00:18:21.674 "iobuf_large_cache_size": 16 00:18:21.674 } 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "method": "bdev_raid_set_options", 00:18:21.674 "params": { 00:18:21.674 "process_window_size_kb": 1024 00:18:21.674 } 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "method": "bdev_iscsi_set_options", 00:18:21.674 "params": { 00:18:21.674 "timeout_sec": 30 00:18:21.674 } 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "method": "bdev_nvme_set_options", 00:18:21.674 "params": { 00:18:21.674 "action_on_timeout": "none", 00:18:21.674 "timeout_us": 0, 00:18:21.674 "timeout_admin_us": 0, 00:18:21.674 "keep_alive_timeout_ms": 10000, 00:18:21.674 "arbitration_burst": 0, 00:18:21.674 "low_priority_weight": 0, 00:18:21.674 "medium_priority_weight": 0, 00:18:21.674 "high_priority_weight": 0, 00:18:21.674 "nvme_adminq_poll_period_us": 10000, 00:18:21.674 "nvme_ioq_poll_period_us": 0, 00:18:21.674 "io_queue_requests": 512, 00:18:21.674 "delay_cmd_submit": true, 00:18:21.674 "transport_retry_count": 4, 00:18:21.674 "bdev_retry_count": 3, 00:18:21.674 "transport_ack_timeout": 0, 00:18:21.674 "ctrlr_loss_timeout_sec": 0, 00:18:21.674 "reconnect_delay_sec": 0, 00:18:21.674 "fast_io_fail_timeout_sec": 0, 00:18:21.674 "disable_auto_failback": false, 00:18:21.674 "generate_uuids": false, 00:18:21.674 "transport_tos": 0, 00:18:21.674 "nvme_error_stat": false, 00:18:21.674 "rdma_srq_size": 0, 00:18:21.674 "io_path_stat": false, 00:18:21.674 "allow_accel_sequence": false, 00:18:21.674 "rdma_max_cq_size": 0, 00:18:21.674 "rdma_cm_event_timeout_ms": 0, 00:18:21.674 "dhchap_digests": [ 00:18:21.674 "sha256", 00:18:21.674 "sha384", 00:18:21.674 "sha512" 00:18:21.674 ], 00:18:21.674 "dhchap_dhgroups": [ 00:18:21.674 "null", 00:18:21.674 "ffdhe2048", 00:18:21.674 "ffdhe3072", 00:18:21.674 "ffdhe4096", 00:18:21.674 "ffdhe6144", 00:18:21.674 "ffdhe8192" 00:18:21.674 ] 00:18:21.674 } 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "method": "bdev_nvme_attach_controller", 00:18:21.674 "params": { 00:18:21.674 "name": "nvme0", 00:18:21.674 "trtype": "TCP", 00:18:21.674 "adrfam": "IPv4", 00:18:21.674 "traddr": "10.0.0.2", 00:18:21.674 "trsvcid": "4420", 00:18:21.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.674 "prchk_reftag": false, 00:18:21.674 "prchk_guard": false, 00:18:21.674 "ctrlr_loss_timeout_sec": 0, 00:18:21.674 "reconnect_delay_sec": 0, 00:18:21.674 "fast_io_fail_timeout_sec": 0, 00:18:21.674 "psk": "key0", 00:18:21.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.674 "hdgst": false, 00:18:21.674 "ddgst": false 00:18:21.674 } 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "method": "bdev_nvme_set_hotplug", 00:18:21.674 "params": { 00:18:21.674 "period_us": 100000, 00:18:21.674 "enable": false 00:18:21.674 } 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "method": "bdev_enable_histogram", 00:18:21.674 "params": { 00:18:21.674 "name": "nvme0n1", 00:18:21.674 "enable": true 00:18:21.674 } 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "method": "bdev_wait_for_examine" 00:18:21.674 } 00:18:21.674 ] 00:18:21.674 }, 00:18:21.674 { 00:18:21.674 "subsystem": "nbd", 00:18:21.674 "config": [] 00:18:21.674 } 00:18:21.674 ] 00:18:21.674 }' 00:18:21.674 21:17:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.674 [2024-07-14 21:17:33.216272] Starting SPDK v24.09-pre git sha1 
719d03c6a / DPDK 24.03.0 initialization... 00:18:21.674 [2024-07-14 21:17:33.216440] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77095 ] 00:18:21.931 [2024-07-14 21:17:33.388495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.188 [2024-07-14 21:17:33.588874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.445 [2024-07-14 21:17:33.840356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:22.445 [2024-07-14 21:17:33.940860] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:22.702 21:17:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:22.702 21:17:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:22.702 21:17:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:22.702 21:17:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:22.959 21:17:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.959 21:17:34 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:22.959 Running I/O for 1 seconds... 00:18:24.328 00:18:24.328 Latency(us) 00:18:24.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.328 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:24.328 Verification LBA range: start 0x0 length 0x2000 00:18:24.328 nvme0n1 : 1.03 2895.12 11.31 0.00 0.00 43469.84 4557.73 26333.56 00:18:24.328 =================================================================================================================== 00:18:24.328 Total : 2895.12 11.31 0.00 0.00 43469.84 4557.73 26333.56 00:18:24.328 0 00:18:24.328 21:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:24.328 21:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:18:24.328 21:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:24.328 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:24.328 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:24.328 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:24.328 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:24.328 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:24.328 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:24.328 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:24.328 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:24.329 nvmf_trace.0 00:18:24.329 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:24.329 21:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 77095 00:18:24.329 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77095 ']' 00:18:24.329 
21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77095 00:18:24.329 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:24.329 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:24.329 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77095 00:18:24.329 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:24.329 killing process with pid 77095 00:18:24.329 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:24.329 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77095' 00:18:24.329 Received shutdown signal, test time was about 1.000000 seconds 00:18:24.329 00:18:24.329 Latency(us) 00:18:24.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.329 =================================================================================================================== 00:18:24.329 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.329 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77095 00:18:24.329 21:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77095 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:25.271 rmmod nvme_tcp 00:18:25.271 rmmod nvme_fabrics 00:18:25.271 rmmod nvme_keyring 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 77067 ']' 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 77067 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77067 ']' 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77067 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77067 00:18:25.271 killing process with pid 77067 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77067' 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77067 00:18:25.271 21:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77067 00:18:26.655 21:17:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:26.656 21:17:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == 
\t\c\p ]] 00:18:26.656 21:17:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:26.656 21:17:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:26.656 21:17:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:26.656 21:17:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.656 21:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.656 21:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.656 21:17:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:26.656 21:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.n53SKgSHfH /tmp/tmp.CVkDLvU6mb /tmp/tmp.ezoJopq28O 00:18:26.656 00:18:26.656 real 1m42.476s 00:18:26.656 user 2m44.187s 00:18:26.656 sys 0m26.627s 00:18:26.656 ************************************ 00:18:26.656 END TEST nvmf_tls 00:18:26.656 ************************************ 00:18:26.656 21:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:26.656 21:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.656 21:17:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:26.656 21:17:38 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:26.656 21:17:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:26.656 21:17:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.656 21:17:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:26.656 ************************************ 00:18:26.656 START TEST nvmf_fips 00:18:26.656 ************************************ 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:26.656 * Looking for test storage... 
00:18:26.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:26.656 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:26.657 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:26.657 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:26.657 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:26.657 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:26.657 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:26.657 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:26.657 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:26.657 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:26.657 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:26.657 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:26.657 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:26.657 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:26.915 Error setting digest 00:18:26.915 00D2EDACB97F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:26.915 00D2EDACB97F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:26.915 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:26.916 Cannot find device "nvmf_tgt_br" 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:26.916 Cannot find device "nvmf_tgt_br2" 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:26.916 Cannot find device "nvmf_tgt_br" 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:26.916 Cannot find device "nvmf_tgt_br2" 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:26.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:26.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:26.916 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:27.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:18:27.174 00:18:27.174 --- 10.0.0.2 ping statistics --- 00:18:27.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.174 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:27.174 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:27.174 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:18:27.174 00:18:27.174 --- 10.0.0.3 ping statistics --- 00:18:27.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.174 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:27.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:18:27.174 00:18:27.174 --- 10.0.0.1 ping statistics --- 00:18:27.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.174 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=77396 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 77396 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:27.174 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 77396 ']' 00:18:27.175 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.175 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.175 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.175 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.175 21:17:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:27.433 [2024-07-14 21:17:38.731372] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:27.433 [2024-07-14 21:17:38.731515] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.433 [2024-07-14 21:17:38.889392] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.692 [2024-07-14 21:17:39.071965] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.692 [2024-07-14 21:17:39.072061] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.692 [2024-07-14 21:17:39.072078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.692 [2024-07-14 21:17:39.072123] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.692 [2024-07-14 21:17:39.072135] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.692 [2024-07-14 21:17:39.072171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.951 [2024-07-14 21:17:39.250331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:28.209 21:17:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.210 21:17:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:28.210 21:17:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.210 21:17:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.210 21:17:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:28.210 21:17:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.210 21:17:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:28.210 21:17:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:28.210 21:17:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:28.210 21:17:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:28.210 21:17:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:28.210 21:17:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:28.210 21:17:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:28.210 21:17:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:28.468 [2024-07-14 21:17:39.885194] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.469 [2024-07-14 21:17:39.901111] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.469 [2024-07-14 21:17:39.901374] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.469 [2024-07-14 21:17:39.950528] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:28.469 malloc0 00:18:28.469 21:17:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
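Behind the single setup_nvmf_tgt_conf call traced above, the target running in nvmf_tgt_ns_spdk is configured over rpc.py using the PSK just written to key.txt. A hedged reconstruction of that RPC sequence is sketched below: the transport, malloc bdev, subsystem, namespace and listener commands mirror the ones the fuzz test issues later in this log, while the nvmf_subsystem_add_host --psk registration is an assumption inferred from the nvmf_tcp_psk_path deprecation warning printed above, not quoted from the trace.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    $RPC nvmf_create_transport -t tcp -o -u 8192                     # flags borrowed from the fuzz setup below
    $RPC bdev_malloc_create -b malloc0 64 512                        # size/block borrowed from the fuzz setup below
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"   # assumed flag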
00:18:28.469 21:17:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=77431 00:18:28.469 21:17:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 77431 /var/tmp/bdevperf.sock 00:18:28.469 21:17:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:28.469 21:17:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 77431 ']' 00:18:28.469 21:17:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.469 21:17:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.469 21:17:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.469 21:17:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.469 21:17:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:28.728 [2024-07-14 21:17:40.125559] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:28.728 [2024-07-14 21:17:40.125723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77431 ] 00:18:28.987 [2024-07-14 21:17:40.291166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.987 [2024-07-14 21:17:40.513949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.245 [2024-07-14 21:17:40.683912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:29.525 21:17:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.525 21:17:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:29.525 21:17:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:29.789 [2024-07-14 21:17:41.185011] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.789 [2024-07-14 21:17:41.185229] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:29.789 TLSTESTn1 00:18:29.789 21:17:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:30.045 Running I/O for 10 seconds... 
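The bdevperf harness above follows a three-step pattern: start bdevperf idle with -z and a private RPC socket, attach a TLS-protected controller to it over that socket, then trigger the workload with bdevperf.py perform_tests. The commands are taken verbatim from the trace; only the sequencing comments are added.

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $BDEVPERF -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # (the harness waits for /var/tmp/bdevperf.sock to appear before issuing the attach)
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests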
00:18:40.011 00:18:40.012 Latency(us) 00:18:40.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.012 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:40.012 Verification LBA range: start 0x0 length 0x2000 00:18:40.012 TLSTESTn1 : 10.04 2830.81 11.06 0.00 0.00 45124.30 9115.46 30504.03 00:18:40.012 =================================================================================================================== 00:18:40.012 Total : 2830.81 11.06 0.00 0.00 45124.30 9115.46 30504.03 00:18:40.012 0 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:40.012 nvmf_trace.0 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 77431 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 77431 ']' 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 77431 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77431 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:40.012 killing process with pid 77431 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77431' 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 77431 00:18:40.012 Received shutdown signal, test time was about 10.000000 seconds 00:18:40.012 00:18:40.012 Latency(us) 00:18:40.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.012 =================================================================================================================== 00:18:40.012 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.012 [2024-07-14 21:17:51.556882] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:40.012 21:17:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 77431 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
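The MiB/s column in the table above is simply the measured IOPS multiplied by the 4096-byte I/O size used in this run (-o 4096); a quick sanity check of the reported figures:

    awk 'BEGIN { printf "%.2f MiB/s\n", 2830.81 * 4096 / 1048576 }'   # -> 11.06 MiB/s, matching the table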
00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:41.391 rmmod nvme_tcp 00:18:41.391 rmmod nvme_fabrics 00:18:41.391 rmmod nvme_keyring 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 77396 ']' 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 77396 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 77396 ']' 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 77396 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77396 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:41.391 killing process with pid 77396 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77396' 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 77396 00:18:41.391 [2024-07-14 21:17:52.730042] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:41.391 21:17:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 77396 00:18:42.325 21:17:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:42.325 21:17:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:42.325 21:17:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:42.325 21:17:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.325 21:17:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:42.325 21:17:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.325 21:17:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.325 21:17:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.325 21:17:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:42.325 21:17:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:42.325 ************************************ 00:18:42.325 END TEST nvmf_fips 00:18:42.325 ************************************ 00:18:42.325 00:18:42.325 real 0m15.807s 00:18:42.325 user 0m22.527s 00:18:42.325 sys 0m5.287s 00:18:42.325 21:17:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:42.325 21:17:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:42.583 21:17:53 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:42.583 21:17:53 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:18:42.583 21:17:53 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:42.583 21:17:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:42.583 21:17:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:42.583 21:17:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:42.583 ************************************ 00:18:42.583 START TEST nvmf_fuzz 00:18:42.583 ************************************ 00:18:42.583 21:17:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:42.583 * Looking for test storage... 00:18:42.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:42.583 21:17:53 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:42.583 21:17:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:42.583 21:17:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.583 21:17:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.583 21:17:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.583 21:17:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.583 21:17:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.583 21:17:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.583 21:17:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.583 21:17:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.583 21:17:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.583 21:17:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.583 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:18:42.583 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:18:42.583 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.583 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.583 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:42.583 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.583 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:42.583 21:17:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.583 21:17:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.583 21:17:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.583 21:17:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.583 21:17:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.583 21:17:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.583 21:17:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:42.584 21:17:54 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:42.584 Cannot find device "nvmf_tgt_br" 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:42.584 Cannot find device "nvmf_tgt_br2" 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:42.584 Cannot find device "nvmf_tgt_br" 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:42.584 Cannot find device "nvmf_tgt_br2" 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:18:42.584 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:42.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:42.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:42.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:42.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:18:42.843 00:18:42.843 --- 10.0.0.2 ping statistics --- 00:18:42.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.843 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:42.843 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:42.843 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:18:42.843 00:18:42.843 --- 10.0.0.3 ping statistics --- 00:18:42.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.843 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:42.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:42.843 00:18:42.843 --- 10.0.0.1 ping statistics --- 00:18:42.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.843 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=77771 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 77771 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 77771 ']' 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
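The nvmf_veth_init sequence just traced (the same one the fips run executed earlier) builds a small bridged topology: the target lives in the nvmf_tgt_ns_spdk namespace and is reached from the host over veth pairs enslaved to nvmf_br. Stripped of the xtrace prefixes, and with the second target interface (nvmf_tgt_if2 / 10.0.0.3) and some link-up steps condensed, the essential commands are:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # host-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target-side pair (nvmf_tgt_if2 is analogous)
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                               # host -> target reachability check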
00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.843 21:17:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.221 Malloc0 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:44.221 21:17:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:45.156 Shutting down the fuzz application 00:18:45.156 21:17:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:46.093 Shutting down the fuzz application 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:46.093 rmmod nvme_tcp 00:18:46.093 rmmod nvme_fabrics 00:18:46.093 rmmod nvme_keyring 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 77771 ']' 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 77771 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 77771 ']' 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 77771 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77771 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:46.093 killing process with pid 77771 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77771' 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 77771 00:18:46.093 21:17:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 77771 00:18:47.473 21:17:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:47.473 21:17:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:47.473 21:17:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:47.473 21:17:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:47.473 21:17:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:47.473 21:17:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.473 21:17:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.473 21:17:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.473 21:17:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:47.473 21:17:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:47.473 00:18:47.473 real 0m4.932s 00:18:47.473 user 0m5.958s 00:18:47.473 sys 0m0.791s 00:18:47.473 21:17:58 
nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:47.473 ************************************ 00:18:47.473 END TEST nvmf_fuzz 00:18:47.473 ************************************ 00:18:47.473 21:17:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:47.473 21:17:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:47.473 21:17:58 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:47.473 21:17:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:47.473 21:17:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.473 21:17:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:47.473 ************************************ 00:18:47.473 START TEST nvmf_multiconnection 00:18:47.473 ************************************ 00:18:47.473 21:17:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:47.473 * Looking for test storage... 00:18:47.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:47.473 21:17:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:47.473 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:18:47.473 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.473 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.474 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.474 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.474 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.474 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.474 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.474 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.474 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.474 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.474 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:18:47.474 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:18:47.474 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.474 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.474 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:47.474 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.474 21:17:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:47.474 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:47.748 Cannot find device "nvmf_tgt_br" 00:18:47.748 21:17:59 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:47.748 Cannot find device "nvmf_tgt_br2" 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:47.748 Cannot find device "nvmf_tgt_br" 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:47.748 Cannot find device "nvmf_tgt_br2" 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:47.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:47.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:47.748 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:48.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:18:48.007 00:18:48.007 --- 10.0.0.2 ping statistics --- 00:18:48.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.007 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:48.007 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:48.007 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:18:48.007 00:18:48.007 --- 10.0.0.3 ping statistics --- 00:18:48.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.007 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:48.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:48.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:48.007 00:18:48.007 --- 10.0.0.1 ping statistics --- 00:18:48.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.007 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=77985 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 77985 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 77985 ']' 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.007 21:17:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.007 [2024-07-14 21:17:59.500905] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:48.007 [2024-07-14 21:17:59.501068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.266 [2024-07-14 21:17:59.679275] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:48.524 [2024-07-14 21:17:59.886430] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
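Annotation: the nvmf_veth_init trace above reduces to a small, fixed topology — a target network namespace holding two veth legs (10.0.0.2 and 10.0.0.3), an initiator leg at 10.0.0.1 left in the root namespace, and a bridge joining the root-side peers — after which nvmf_tgt is launched inside that namespace. The following is a condensed sketch of those steps using only the commands, interface names, and addresses visible in the log; ordering and error handling differ slightly from test/nvmf/common.sh, so treat it as illustrative rather than the script itself.

    # Sketch of the veth/netns setup traced by nvmf_veth_init above (not the script verbatim).
    ip netns add nvmf_tgt_ns_spdk                                  # namespace that will host nvmf_tgt

    # veth pairs: *_if ends carry traffic, *_br ends get enslaved to the bridge below
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target legs move into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listener addresses
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the root-namespace ends together
    for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$link" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # admit NVMe/TCP traffic on port 4420 and verify reachability before starting the target
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

    # the target application then runs inside the namespace, exactly as traced below
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &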
00:18:48.524 [2024-07-14 21:17:59.886495] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.524 [2024-07-14 21:17:59.886514] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.524 [2024-07-14 21:17:59.886530] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.524 [2024-07-14 21:17:59.886546] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:48.524 [2024-07-14 21:17:59.886733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.524 [2024-07-14 21:17:59.886916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.525 [2024-07-14 21:17:59.887915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.525 [2024-07-14 21:17:59.887923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.783 [2024-07-14 21:18:00.074771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:49.042 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.042 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:18:49.042 21:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:49.042 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:49.042 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.042 21:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.042 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:49.042 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.042 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.042 [2024-07-14 21:18:00.488053] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.042 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.042 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:18:49.042 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.042 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:49.042 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.042 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.301 Malloc1 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.301 [2024-07-14 21:18:00.619216] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.301 Malloc2 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.301 Malloc3 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
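Annotation: the per-subsystem pattern that repeats through cnode11 below is a single loop: create a malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), create subsystem cnode$i with serial SPDK$i, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420; the host side (further down) then runs nvme connect against each subsystem and waits for a block device with that serial. A minimal sketch of the equivalent calls issued through scripts/rpc.py rather than the test's rpc_cmd wrapper — flags are copied from the trace, the loop structure is illustrative only.

    # Sketch of the multiconnection provisioning loop traced above (multiconnection.sh@19-30).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NVMF_SUBSYS=11

    "$rpc" nvmf_create_transport -t tcp -o -u 8192                 # same transport flags as the trace

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"             # 64 MiB malloc bdev, 512-byte blocks
        "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

    # Host side, mirroring the connect/waitforserial loop later in the log:
    # NVME_HOSTNQN / NVME_HOSTID are the values generated by 'nvme gen-hostnqn' above.
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
             -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
        until lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do sleep 2; done   # wait for the namespace to appear
    done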
00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.301 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.560 Malloc4 00:18:49.560 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.560 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:49.560 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.560 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.560 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.560 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:49.560 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.560 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.560 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.560 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:49.560 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.560 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.561 21:18:00 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.561 Malloc5 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.561 21:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.561 Malloc6 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:49.561 21:18:01 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.561 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.820 Malloc7 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.820 Malloc8 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.820 Malloc9 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.820 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 Malloc10 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.079 21:18:01 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 Malloc11 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:50.079 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 
-t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:50.338 21:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:50.338 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:50.338 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:50.338 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:50.338 21:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:52.243 21:18:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:52.243 21:18:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:52.243 21:18:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:18:52.243 21:18:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:52.243 21:18:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:52.243 21:18:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:52.243 21:18:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.243 21:18:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:52.501 21:18:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:52.501 21:18:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:52.501 21:18:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:52.501 21:18:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:52.501 21:18:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:54.402 21:18:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:54.402 21:18:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:54.402 21:18:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:18:54.402 21:18:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:54.402 21:18:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:54.402 21:18:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:54.402 21:18:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:54.402 21:18:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:54.660 21:18:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:54.660 21:18:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:54.660 21:18:06 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:54.660 21:18:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:54.660 21:18:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:56.558 21:18:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:56.558 21:18:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:56.558 21:18:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:18:56.558 21:18:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:56.558 21:18:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:56.559 21:18:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:56.559 21:18:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:56.559 21:18:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:56.816 21:18:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:56.816 21:18:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:56.816 21:18:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:56.816 21:18:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:56.816 21:18:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:58.717 21:18:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:58.717 21:18:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:58.717 21:18:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:18:58.717 21:18:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:58.717 21:18:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:58.717 21:18:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:58.717 21:18:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.717 21:18:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:58.975 21:18:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:58.975 21:18:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:58.975 21:18:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:58.975 21:18:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:58.975 21:18:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:00.874 21:18:12 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:00.874 21:18:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:19:00.874 21:18:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:00.874 21:18:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:00.874 21:18:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.874 21:18:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:00.874 21:18:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.874 21:18:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:19:01.133 21:18:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:19:01.133 21:18:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:01.133 21:18:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:01.133 21:18:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:01.133 21:18:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:03.030 21:18:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:03.030 21:18:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:03.030 21:18:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:19:03.030 21:18:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:03.030 21:18:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:03.030 21:18:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:03.030 21:18:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.030 21:18:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:19:03.288 21:18:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:19:03.288 21:18:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:03.288 21:18:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:03.288 21:18:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:03.288 21:18:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:05.189 21:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:05.189 21:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:05.189 21:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:19:05.189 
21:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:05.189 21:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:05.189 21:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:05.189 21:18:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.189 21:18:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:19:05.447 21:18:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:19:05.447 21:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:05.447 21:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:05.447 21:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:05.447 21:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:07.348 21:18:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:07.348 21:18:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:07.348 21:18:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:19:07.348 21:18:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:07.348 21:18:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:07.348 21:18:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:07.348 21:18:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.348 21:18:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:19:07.606 21:18:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:07.606 21:18:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:07.606 21:18:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:07.606 21:18:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:07.606 21:18:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:09.509 21:18:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:09.509 21:18:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:09.509 21:18:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:19:09.509 21:18:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:09.509 21:18:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.509 21:18:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # 
return 0 00:19:09.509 21:18:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.509 21:18:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:19:09.767 21:18:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:09.767 21:18:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:09.767 21:18:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:09.767 21:18:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:09.767 21:18:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:11.669 21:18:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:11.670 21:18:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:11.670 21:18:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:19:11.670 21:18:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:11.670 21:18:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:11.670 21:18:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:11.670 21:18:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:11.670 21:18:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:19:11.942 21:18:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:11.942 21:18:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:11.942 21:18:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:11.942 21:18:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:11.942 21:18:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:13.845 21:18:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:13.845 21:18:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:13.845 21:18:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:19:13.845 21:18:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:13.845 21:18:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:13.845 21:18:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:13.845 21:18:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:13.845 [global] 00:19:13.845 thread=1 00:19:13.845 invalidate=1 00:19:13.845 rw=read 00:19:13.845 time_based=1 00:19:13.845 
runtime=10 00:19:13.845 ioengine=libaio 00:19:13.845 direct=1 00:19:13.845 bs=262144 00:19:13.845 iodepth=64 00:19:13.845 norandommap=1 00:19:13.845 numjobs=1 00:19:13.845 00:19:13.845 [job0] 00:19:13.845 filename=/dev/nvme0n1 00:19:13.845 [job1] 00:19:13.845 filename=/dev/nvme10n1 00:19:14.104 [job2] 00:19:14.104 filename=/dev/nvme1n1 00:19:14.104 [job3] 00:19:14.104 filename=/dev/nvme2n1 00:19:14.104 [job4] 00:19:14.104 filename=/dev/nvme3n1 00:19:14.104 [job5] 00:19:14.104 filename=/dev/nvme4n1 00:19:14.104 [job6] 00:19:14.104 filename=/dev/nvme5n1 00:19:14.104 [job7] 00:19:14.104 filename=/dev/nvme6n1 00:19:14.104 [job8] 00:19:14.104 filename=/dev/nvme7n1 00:19:14.104 [job9] 00:19:14.104 filename=/dev/nvme8n1 00:19:14.104 [job10] 00:19:14.104 filename=/dev/nvme9n1 00:19:14.104 Could not set queue depth (nvme0n1) 00:19:14.104 Could not set queue depth (nvme10n1) 00:19:14.104 Could not set queue depth (nvme1n1) 00:19:14.104 Could not set queue depth (nvme2n1) 00:19:14.104 Could not set queue depth (nvme3n1) 00:19:14.104 Could not set queue depth (nvme4n1) 00:19:14.104 Could not set queue depth (nvme5n1) 00:19:14.104 Could not set queue depth (nvme6n1) 00:19:14.104 Could not set queue depth (nvme7n1) 00:19:14.104 Could not set queue depth (nvme8n1) 00:19:14.104 Could not set queue depth (nvme9n1) 00:19:14.104 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.104 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.104 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.104 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.104 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.104 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.104 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.104 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.104 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.104 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.104 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.104 fio-3.35 00:19:14.104 Starting 11 threads 00:19:26.340 00:19:26.340 job0: (groupid=0, jobs=1): err= 0: pid=78444: Sun Jul 14 21:18:36 2024 00:19:26.340 read: IOPS=413, BW=103MiB/s (108MB/s)(1047MiB/10121msec) 00:19:26.340 slat (usec): min=17, max=87792, avg=2378.16, stdev=6683.60 00:19:26.340 clat (msec): min=13, max=275, avg=152.16, stdev=37.28 00:19:26.340 lat (msec): min=13, max=275, avg=154.54, stdev=38.15 00:19:26.340 clat percentiles (msec): 00:19:26.340 | 1.00th=[ 51], 5.00th=[ 64], 10.00th=[ 74], 20.00th=[ 155], 00:19:26.340 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:19:26.340 | 70.00th=[ 169], 80.00th=[ 171], 90.00th=[ 176], 95.00th=[ 180], 00:19:26.340 | 99.00th=[ 215], 99.50th=[ 239], 99.90th=[ 262], 99.95th=[ 262], 00:19:26.340 | 99.99th=[ 275] 00:19:26.340 bw ( KiB/s): min=92487, max=217010, per=5.95%, avg=105531.80, 
stdev=28857.99, samples=20 00:19:26.340 iops : min= 361, max= 847, avg=412.10, stdev=112.61, samples=20 00:19:26.340 lat (msec) : 20=0.02%, 50=0.91%, 100=14.35%, 250=84.45%, 500=0.26% 00:19:26.340 cpu : usr=0.17%, sys=1.68%, ctx=1014, majf=0, minf=4097 00:19:26.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:26.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.340 issued rwts: total=4187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.340 job1: (groupid=0, jobs=1): err= 0: pid=78445: Sun Jul 14 21:18:36 2024 00:19:26.340 read: IOPS=622, BW=156MiB/s (163MB/s)(1571MiB/10087msec) 00:19:26.340 slat (usec): min=20, max=27395, avg=1588.68, stdev=3391.90 00:19:26.340 clat (msec): min=17, max=205, avg=101.01, stdev=16.17 00:19:26.340 lat (msec): min=17, max=205, avg=102.60, stdev=16.35 00:19:26.340 clat percentiles (msec): 00:19:26.340 | 1.00th=[ 58], 5.00th=[ 71], 10.00th=[ 86], 20.00th=[ 94], 00:19:26.340 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 101], 60.00th=[ 103], 00:19:26.340 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 129], 00:19:26.340 | 99.00th=[ 140], 99.50th=[ 153], 99.90th=[ 199], 99.95th=[ 205], 00:19:26.340 | 99.99th=[ 207] 00:19:26.340 bw ( KiB/s): min=125691, max=210432, per=8.97%, avg=159206.00, stdev=20097.84, samples=20 00:19:26.340 iops : min= 490, max= 822, avg=621.65, stdev=78.58, samples=20 00:19:26.340 lat (msec) : 20=0.02%, 50=0.59%, 100=47.06%, 250=52.34% 00:19:26.340 cpu : usr=0.38%, sys=2.66%, ctx=1472, majf=0, minf=4097 00:19:26.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:26.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.340 issued rwts: total=6284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.340 job2: (groupid=0, jobs=1): err= 0: pid=78446: Sun Jul 14 21:18:36 2024 00:19:26.340 read: IOPS=624, BW=156MiB/s (164MB/s)(1575MiB/10089msec) 00:19:26.340 slat (usec): min=17, max=30955, avg=1580.19, stdev=3471.95 00:19:26.340 clat (msec): min=8, max=206, avg=100.80, stdev=18.14 00:19:26.340 lat (msec): min=8, max=206, avg=102.38, stdev=18.39 00:19:26.340 clat percentiles (msec): 00:19:26.340 | 1.00th=[ 47], 5.00th=[ 67], 10.00th=[ 82], 20.00th=[ 94], 00:19:26.340 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 101], 60.00th=[ 103], 00:19:26.340 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 122], 95.00th=[ 134], 00:19:26.340 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 190], 99.95th=[ 194], 00:19:26.340 | 99.99th=[ 207] 00:19:26.340 bw ( KiB/s): min=125178, max=218112, per=8.99%, avg=159577.30, stdev=23567.19, samples=20 00:19:26.340 iops : min= 488, max= 852, avg=623.10, stdev=92.14, samples=20 00:19:26.341 lat (msec) : 10=0.03%, 20=0.21%, 50=0.86%, 100=46.37%, 250=52.53% 00:19:26.341 cpu : usr=0.45%, sys=2.20%, ctx=1462, majf=0, minf=4097 00:19:26.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:26.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.341 issued rwts: total=6299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.341 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:19:26.341 job3: (groupid=0, jobs=1): err= 0: pid=78447: Sun Jul 14 21:18:36 2024 00:19:26.341 read: IOPS=890, BW=223MiB/s (233MB/s)(2230MiB/10015msec) 00:19:26.341 slat (usec): min=20, max=32270, avg=1115.94, stdev=2484.72 00:19:26.341 clat (msec): min=14, max=122, avg=70.66, stdev= 9.76 00:19:26.341 lat (msec): min=22, max=122, avg=71.77, stdev= 9.82 00:19:26.341 clat percentiles (msec): 00:19:26.341 | 1.00th=[ 53], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 65], 00:19:26.341 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 71], 00:19:26.341 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 79], 95.00th=[ 92], 00:19:26.341 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 115], 99.95th=[ 117], 00:19:26.341 | 99.99th=[ 123] 00:19:26.341 bw ( KiB/s): min=153907, max=248846, per=12.77%, avg=226634.30, stdev=23861.92, samples=20 00:19:26.341 iops : min= 601, max= 972, avg=885.20, stdev=93.20, samples=20 00:19:26.341 lat (msec) : 20=0.01%, 50=0.82%, 100=95.82%, 250=3.35% 00:19:26.341 cpu : usr=0.45%, sys=3.62%, ctx=1886, majf=0, minf=4097 00:19:26.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:26.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.341 issued rwts: total=8919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.341 job4: (groupid=0, jobs=1): err= 0: pid=78448: Sun Jul 14 21:18:36 2024 00:19:26.341 read: IOPS=890, BW=223MiB/s (234MB/s)(2231MiB/10017msec) 00:19:26.341 slat (usec): min=17, max=41567, avg=1116.31, stdev=2560.66 00:19:26.341 clat (msec): min=15, max=130, avg=70.62, stdev= 9.83 00:19:26.341 lat (msec): min=18, max=133, avg=71.73, stdev= 9.91 00:19:26.341 clat percentiles (msec): 00:19:26.341 | 1.00th=[ 55], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 65], 00:19:26.341 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 71], 00:19:26.341 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 79], 95.00th=[ 90], 00:19:26.341 | 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 126], 99.95th=[ 127], 00:19:26.341 | 99.99th=[ 131] 00:19:26.341 bw ( KiB/s): min=153088, max=251392, per=12.78%, avg=226773.80, stdev=24017.44, samples=20 00:19:26.341 iops : min= 598, max= 982, avg=885.75, stdev=93.78, samples=20 00:19:26.341 lat (msec) : 20=0.13%, 50=0.31%, 100=96.53%, 250=3.03% 00:19:26.341 cpu : usr=0.46%, sys=2.84%, ctx=1942, majf=0, minf=4097 00:19:26.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:26.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.341 issued rwts: total=8925,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.341 job5: (groupid=0, jobs=1): err= 0: pid=78449: Sun Jul 14 21:18:36 2024 00:19:26.341 read: IOPS=382, BW=95.5MiB/s (100MB/s)(967MiB/10120msec) 00:19:26.341 slat (usec): min=16, max=89190, avg=2588.02, stdev=6338.68 00:19:26.341 clat (msec): min=40, max=276, avg=164.74, stdev=15.10 00:19:26.341 lat (msec): min=40, max=281, avg=167.32, stdev=15.79 00:19:26.341 clat percentiles (msec): 00:19:26.341 | 1.00th=[ 125], 5.00th=[ 138], 10.00th=[ 153], 20.00th=[ 159], 00:19:26.341 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 167], 00:19:26.341 | 70.00th=[ 169], 80.00th=[ 171], 90.00th=[ 178], 95.00th=[ 182], 
00:19:26.341 | 99.00th=[ 207], 99.50th=[ 236], 99.90th=[ 266], 99.95th=[ 266], 00:19:26.341 | 99.99th=[ 275] 00:19:26.341 bw ( KiB/s): min=90443, max=104960, per=5.48%, avg=97263.90, stdev=3427.98, samples=20 00:19:26.341 iops : min= 353, max= 410, avg=379.85, stdev=13.44, samples=20 00:19:26.341 lat (msec) : 50=0.21%, 100=0.03%, 250=99.43%, 500=0.34% 00:19:26.341 cpu : usr=0.19%, sys=1.40%, ctx=937, majf=0, minf=4097 00:19:26.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:26.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.341 issued rwts: total=3866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.341 job6: (groupid=0, jobs=1): err= 0: pid=78450: Sun Jul 14 21:18:36 2024 00:19:26.341 read: IOPS=385, BW=96.5MiB/s (101MB/s)(977MiB/10121msec) 00:19:26.341 slat (usec): min=17, max=66650, avg=2557.61, stdev=6725.20 00:19:26.341 clat (msec): min=15, max=279, avg=163.06, stdev=18.92 00:19:26.341 lat (msec): min=15, max=279, avg=165.62, stdev=19.79 00:19:26.341 clat percentiles (msec): 00:19:26.341 | 1.00th=[ 73], 5.00th=[ 134], 10.00th=[ 146], 20.00th=[ 159], 00:19:26.341 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:19:26.341 | 70.00th=[ 169], 80.00th=[ 171], 90.00th=[ 176], 95.00th=[ 180], 00:19:26.341 | 99.00th=[ 222], 99.50th=[ 230], 99.90th=[ 271], 99.95th=[ 271], 00:19:26.341 | 99.99th=[ 279] 00:19:26.341 bw ( KiB/s): min=92487, max=119535, per=5.54%, avg=98354.45, stdev=6309.56, samples=20 00:19:26.341 iops : min= 361, max= 466, avg=384.05, stdev=24.48, samples=20 00:19:26.341 lat (msec) : 20=0.03%, 50=0.31%, 100=1.28%, 250=98.26%, 500=0.13% 00:19:26.341 cpu : usr=0.17%, sys=1.79%, ctx=944, majf=0, minf=4097 00:19:26.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:26.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.341 issued rwts: total=3906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.341 job7: (groupid=0, jobs=1): err= 0: pid=78451: Sun Jul 14 21:18:36 2024 00:19:26.341 read: IOPS=1364, BW=341MiB/s (358MB/s)(3441MiB/10086msec) 00:19:26.341 slat (usec): min=16, max=100601, avg=716.74, stdev=2121.35 00:19:26.341 clat (msec): min=10, max=217, avg=46.11, stdev=27.44 00:19:26.341 lat (msec): min=10, max=217, avg=46.83, stdev=27.81 00:19:26.341 clat percentiles (msec): 00:19:26.341 | 1.00th=[ 33], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 36], 00:19:26.341 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 39], 00:19:26.341 | 70.00th=[ 39], 80.00th=[ 41], 90.00th=[ 49], 95.00th=[ 122], 00:19:26.341 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 197], 99.95th=[ 197], 00:19:26.341 | 99.99th=[ 218] 00:19:26.341 bw ( KiB/s): min=115200, max=442880, per=19.76%, avg=350640.35, stdev=130806.16, samples=20 00:19:26.341 iops : min= 450, max= 1730, avg=1369.50, stdev=511.01, samples=20 00:19:26.341 lat (msec) : 20=0.15%, 50=89.93%, 100=1.08%, 250=8.84% 00:19:26.341 cpu : usr=0.62%, sys=4.05%, ctx=3028, majf=0, minf=4097 00:19:26.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:26.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.341 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.341 issued rwts: total=13765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.341 job8: (groupid=0, jobs=1): err= 0: pid=78452: Sun Jul 14 21:18:36 2024 00:19:26.341 read: IOPS=384, BW=96.0MiB/s (101MB/s)(972MiB/10125msec) 00:19:26.341 slat (usec): min=17, max=72092, avg=2570.10, stdev=6748.99 00:19:26.341 clat (msec): min=19, max=274, avg=163.87, stdev=17.12 00:19:26.341 lat (msec): min=20, max=286, avg=166.44, stdev=18.02 00:19:26.341 clat percentiles (msec): 00:19:26.341 | 1.00th=[ 106], 5.00th=[ 133], 10.00th=[ 148], 20.00th=[ 159], 00:19:26.341 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:19:26.341 | 70.00th=[ 169], 80.00th=[ 171], 90.00th=[ 176], 95.00th=[ 182], 00:19:26.341 | 99.00th=[ 213], 99.50th=[ 228], 99.90th=[ 275], 99.95th=[ 275], 00:19:26.341 | 99.99th=[ 275] 00:19:26.341 bw ( KiB/s): min=88064, max=116736, per=5.52%, avg=97881.65, stdev=5889.73, samples=20 00:19:26.341 iops : min= 344, max= 456, avg=382.25, stdev=23.02, samples=20 00:19:26.341 lat (msec) : 20=0.03%, 50=0.15%, 100=0.39%, 250=99.10%, 500=0.33% 00:19:26.341 cpu : usr=0.19%, sys=1.60%, ctx=969, majf=0, minf=4097 00:19:26.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:26.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.341 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.341 job9: (groupid=0, jobs=1): err= 0: pid=78453: Sun Jul 14 21:18:36 2024 00:19:26.341 read: IOPS=385, BW=96.4MiB/s (101MB/s)(976MiB/10121msec) 00:19:26.341 slat (usec): min=16, max=65505, avg=2563.82, stdev=5985.74 00:19:26.341 clat (msec): min=62, max=271, avg=163.17, stdev=17.78 00:19:26.341 lat (msec): min=62, max=271, avg=165.74, stdev=18.42 00:19:26.341 clat percentiles (msec): 00:19:26.341 | 1.00th=[ 70], 5.00th=[ 133], 10.00th=[ 146], 20.00th=[ 159], 00:19:26.341 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 167], 00:19:26.341 | 70.00th=[ 169], 80.00th=[ 171], 90.00th=[ 178], 95.00th=[ 184], 00:19:26.341 | 99.00th=[ 197], 99.50th=[ 215], 99.90th=[ 271], 99.95th=[ 271], 00:19:26.341 | 99.99th=[ 271] 00:19:26.341 bw ( KiB/s): min=93508, max=113378, per=5.54%, avg=98248.30, stdev=5458.35, samples=20 00:19:26.341 iops : min= 365, max= 442, avg=383.65, stdev=21.25, samples=20 00:19:26.341 lat (msec) : 100=1.18%, 250=98.57%, 500=0.26% 00:19:26.341 cpu : usr=0.22%, sys=1.26%, ctx=997, majf=0, minf=4097 00:19:26.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:26.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.341 issued rwts: total=3904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.341 job10: (groupid=0, jobs=1): err= 0: pid=78454: Sun Jul 14 21:18:36 2024 00:19:26.341 read: IOPS=619, BW=155MiB/s (162MB/s)(1561MiB/10083msec) 00:19:26.341 slat (usec): min=16, max=26370, avg=1597.58, stdev=3444.12 00:19:26.341 clat (msec): min=17, max=229, avg=101.66, stdev=17.64 00:19:26.341 lat (msec): min=17, max=229, avg=103.26, stdev=17.87 00:19:26.341 clat percentiles (msec): 00:19:26.341 | 1.00th=[ 52], 5.00th=[ 69], 10.00th=[ 
85], 20.00th=[ 94], 00:19:26.341 | 30.00th=[ 97], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 104], 00:19:26.341 | 70.00th=[ 107], 80.00th=[ 112], 90.00th=[ 122], 95.00th=[ 131], 00:19:26.341 | 99.00th=[ 142], 99.50th=[ 157], 99.90th=[ 222], 99.95th=[ 222], 00:19:26.341 | 99.99th=[ 230] 00:19:26.341 bw ( KiB/s): min=120561, max=207775, per=8.91%, avg=158141.30, stdev=21046.87, samples=20 00:19:26.341 iops : min= 470, max= 811, avg=617.50, stdev=82.28, samples=20 00:19:26.341 lat (msec) : 20=0.05%, 50=0.91%, 100=42.63%, 250=56.41% 00:19:26.341 cpu : usr=0.31%, sys=2.16%, ctx=1453, majf=0, minf=4097 00:19:26.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:26.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.341 issued rwts: total=6242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.341 00:19:26.341 Run status group 0 (all jobs): 00:19:26.341 READ: bw=1733MiB/s (1817MB/s), 95.5MiB/s-341MiB/s (100MB/s-358MB/s), io=17.1GiB (18.4GB), run=10015-10125msec 00:19:26.341 00:19:26.341 Disk stats (read/write): 00:19:26.341 nvme0n1: ios=8253/0, merge=0/0, ticks=1226308/0, in_queue=1226308, util=97.87% 00:19:26.341 nvme10n1: ios=12443/0, merge=0/0, ticks=1230641/0, in_queue=1230641, util=97.99% 00:19:26.341 nvme1n1: ios=12471/0, merge=0/0, ticks=1230087/0, in_queue=1230087, util=98.11% 00:19:26.341 nvme2n1: ios=17768/0, merge=0/0, ticks=1239889/0, in_queue=1239889, util=98.28% 00:19:26.341 nvme3n1: ios=17788/0, merge=0/0, ticks=1241228/0, in_queue=1241228, util=98.33% 00:19:26.341 nvme4n1: ios=7609/0, merge=0/0, ticks=1224349/0, in_queue=1224349, util=98.47% 00:19:26.341 nvme5n1: ios=7695/0, merge=0/0, ticks=1226070/0, in_queue=1226070, util=98.67% 00:19:26.341 nvme6n1: ios=27410/0, merge=0/0, ticks=1234419/0, in_queue=1234419, util=98.66% 00:19:26.341 nvme7n1: ios=7657/0, merge=0/0, ticks=1226479/0, in_queue=1226479, util=98.95% 00:19:26.341 nvme8n1: ios=7682/0, merge=0/0, ticks=1226032/0, in_queue=1226032, util=99.08% 00:19:26.341 nvme9n1: ios=12386/0, merge=0/0, ticks=1231965/0, in_queue=1231965, util=99.17% 00:19:26.341 21:18:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:26.341 [global] 00:19:26.341 thread=1 00:19:26.341 invalidate=1 00:19:26.341 rw=randwrite 00:19:26.341 time_based=1 00:19:26.341 runtime=10 00:19:26.341 ioengine=libaio 00:19:26.341 direct=1 00:19:26.341 bs=262144 00:19:26.341 iodepth=64 00:19:26.341 norandommap=1 00:19:26.341 numjobs=1 00:19:26.341 00:19:26.341 [job0] 00:19:26.341 filename=/dev/nvme0n1 00:19:26.341 [job1] 00:19:26.341 filename=/dev/nvme10n1 00:19:26.341 [job2] 00:19:26.341 filename=/dev/nvme1n1 00:19:26.341 [job3] 00:19:26.341 filename=/dev/nvme2n1 00:19:26.341 [job4] 00:19:26.341 filename=/dev/nvme3n1 00:19:26.341 [job5] 00:19:26.341 filename=/dev/nvme4n1 00:19:26.341 [job6] 00:19:26.341 filename=/dev/nvme5n1 00:19:26.341 [job7] 00:19:26.341 filename=/dev/nvme6n1 00:19:26.341 [job8] 00:19:26.341 filename=/dev/nvme7n1 00:19:26.341 [job9] 00:19:26.341 filename=/dev/nvme8n1 00:19:26.341 [job10] 00:19:26.341 filename=/dev/nvme9n1 00:19:26.341 Could not set queue depth (nvme0n1) 00:19:26.341 Could not set queue depth (nvme10n1) 00:19:26.341 Could not set queue depth (nvme1n1) 00:19:26.341 Could not set queue depth (nvme2n1) 
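For reference, the two fio-wrapper invocations in this test (-t read above, -t randwrite here, both with -i 262144 -d 64 -r 10) expand to a job file matching the [global]/[jobN] listing echoed in the log. A minimal standalone sketch that rebuilds an equivalent job file, assuming the only /dev/nvme*n1 nodes present are the eleven namespaces connected above (the wrapper script itself may assemble it differently):

    #!/usr/bin/env bash
    # Rebuild a job file equivalent to the one echoed above and run it.
    job=/tmp/nvmf_multiconnection.fio
    cat > "$job" <<'EOF'
    [global]
    ioengine=libaio
    direct=1
    thread=1
    invalidate=1
    ; rw=read for the first pass, rw=randwrite for the second
    rw=randwrite
    bs=262144
    iodepth=64
    time_based=1
    runtime=10
    norandommap=1
    numjobs=1
    EOF
    # One [jobN] section per namespace; the glob's lexical order (nvme0n1,
    # nvme10n1, nvme1n1, ...) matches the job-to-device mapping in the log.
    i=0
    for dev in /dev/nvme*n1; do
        printf '[job%d]\nfilename=%s\n' "$i" "$dev" >> "$job"
        i=$((i + 1))
    done
    fio "$job"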
00:19:26.341 Could not set queue depth (nvme3n1) 00:19:26.341 Could not set queue depth (nvme4n1) 00:19:26.341 Could not set queue depth (nvme5n1) 00:19:26.341 Could not set queue depth (nvme6n1) 00:19:26.341 Could not set queue depth (nvme7n1) 00:19:26.341 Could not set queue depth (nvme8n1) 00:19:26.342 Could not set queue depth (nvme9n1) 00:19:26.342 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.342 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.342 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.342 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.342 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.342 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.342 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.342 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.342 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.342 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.342 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.342 fio-3.35 00:19:26.342 Starting 11 threads 00:19:36.320 00:19:36.320 job0: (groupid=0, jobs=1): err= 0: pid=78645: Sun Jul 14 21:18:46 2024 00:19:36.320 write: IOPS=352, BW=88.1MiB/s (92.3MB/s)(892MiB/10132msec); 0 zone resets 00:19:36.320 slat (usec): min=18, max=55449, avg=2796.13, stdev=5006.15 00:19:36.320 clat (msec): min=58, max=278, avg=178.82, stdev=30.66 00:19:36.320 lat (msec): min=58, max=278, avg=181.61, stdev=30.75 00:19:36.320 clat percentiles (msec): 00:19:36.320 | 1.00th=[ 130], 5.00th=[ 140], 10.00th=[ 144], 20.00th=[ 150], 00:19:36.320 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 186], 60.00th=[ 197], 00:19:36.320 | 70.00th=[ 205], 80.00th=[ 211], 90.00th=[ 215], 95.00th=[ 218], 00:19:36.320 | 99.00th=[ 228], 99.50th=[ 241], 99.90th=[ 271], 99.95th=[ 279], 00:19:36.320 | 99.99th=[ 279] 00:19:36.320 bw ( KiB/s): min=73728, max=110592, per=7.09%, avg=89745.80, stdev=14846.83, samples=20 00:19:36.320 iops : min= 288, max= 432, avg=350.55, stdev=58.01, samples=20 00:19:36.320 lat (msec) : 100=0.56%, 250=99.05%, 500=0.39% 00:19:36.320 cpu : usr=0.68%, sys=1.05%, ctx=3628, majf=0, minf=1 00:19:36.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:19:36.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.320 issued rwts: total=0,3569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.320 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.320 job1: (groupid=0, jobs=1): err= 0: pid=78646: Sun Jul 14 21:18:46 2024 00:19:36.320 write: IOPS=356, BW=89.2MiB/s (93.6MB/s)(904MiB/10134msec); 0 zone resets 00:19:36.320 slat (usec): min=17, max=50017, avg=2758.89, stdev=4881.73 00:19:36.320 clat (msec): 
min=18, max=279, avg=176.49, stdev=31.38 00:19:36.320 lat (msec): min=18, max=279, avg=179.25, stdev=31.52 00:19:36.320 clat percentiles (msec): 00:19:36.320 | 1.00th=[ 80], 5.00th=[ 140], 10.00th=[ 144], 20.00th=[ 150], 00:19:36.320 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 184], 60.00th=[ 192], 00:19:36.320 | 70.00th=[ 201], 80.00th=[ 209], 90.00th=[ 213], 95.00th=[ 213], 00:19:36.320 | 99.00th=[ 222], 99.50th=[ 234], 99.90th=[ 271], 99.95th=[ 279], 00:19:36.320 | 99.99th=[ 279] 00:19:36.320 bw ( KiB/s): min=75927, max=111104, per=7.19%, avg=90962.10, stdev=13901.60, samples=20 00:19:36.320 iops : min= 296, max= 434, avg=355.25, stdev=54.35, samples=20 00:19:36.320 lat (msec) : 20=0.11%, 50=0.55%, 100=0.66%, 250=98.29%, 500=0.39% 00:19:36.320 cpu : usr=0.64%, sys=1.09%, ctx=3938, majf=0, minf=1 00:19:36.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:19:36.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.320 issued rwts: total=0,3617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.320 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.320 job2: (groupid=0, jobs=1): err= 0: pid=78654: Sun Jul 14 21:18:46 2024 00:19:36.320 write: IOPS=335, BW=84.0MiB/s (88.1MB/s)(854MiB/10164msec); 0 zone resets 00:19:36.320 slat (usec): min=18, max=107087, avg=2868.16, stdev=5502.72 00:19:36.320 clat (msec): min=16, max=347, avg=187.56, stdev=40.75 00:19:36.320 lat (msec): min=18, max=347, avg=190.43, stdev=41.10 00:19:36.320 clat percentiles (msec): 00:19:36.320 | 1.00th=[ 35], 5.00th=[ 132], 10.00th=[ 144], 20.00th=[ 150], 00:19:36.320 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 199], 00:19:36.320 | 70.00th=[ 211], 80.00th=[ 220], 90.00th=[ 228], 95.00th=[ 232], 00:19:36.320 | 99.00th=[ 275], 99.50th=[ 300], 99.90th=[ 334], 99.95th=[ 347], 00:19:36.320 | 99.99th=[ 347] 00:19:36.320 bw ( KiB/s): min=67584, max=121344, per=6.78%, avg=85777.00, stdev=14868.32, samples=20 00:19:36.320 iops : min= 264, max= 474, avg=335.05, stdev=58.08, samples=20 00:19:36.320 lat (msec) : 20=0.06%, 50=1.73%, 100=2.17%, 250=94.17%, 500=1.87% 00:19:36.320 cpu : usr=0.62%, sys=0.91%, ctx=4245, majf=0, minf=1 00:19:36.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:36.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.320 issued rwts: total=0,3414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.320 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.320 job3: (groupid=0, jobs=1): err= 0: pid=78659: Sun Jul 14 21:18:46 2024 00:19:36.320 write: IOPS=440, BW=110MiB/s (115MB/s)(1116MiB/10139msec); 0 zone resets 00:19:36.320 slat (usec): min=17, max=19399, avg=2236.60, stdev=3848.42 00:19:36.320 clat (msec): min=21, max=281, avg=143.07, stdev=14.29 00:19:36.320 lat (msec): min=21, max=281, avg=145.31, stdev=13.95 00:19:36.320 clat percentiles (msec): 00:19:36.320 | 1.00th=[ 104], 5.00th=[ 133], 10.00th=[ 134], 20.00th=[ 140], 00:19:36.320 | 30.00th=[ 142], 40.00th=[ 142], 50.00th=[ 142], 60.00th=[ 144], 00:19:36.320 | 70.00th=[ 148], 80.00th=[ 150], 90.00th=[ 153], 95.00th=[ 153], 00:19:36.320 | 99.00th=[ 182], 99.50th=[ 228], 99.90th=[ 271], 99.95th=[ 275], 00:19:36.320 | 99.99th=[ 284] 00:19:36.320 bw ( KiB/s): min=108544, max=116736, per=8.91%, avg=112653.90, stdev=3342.75, samples=20 
00:19:36.320 iops : min= 424, max= 456, avg=440.05, stdev=13.05, samples=20 00:19:36.320 lat (msec) : 50=0.45%, 100=0.54%, 250=98.70%, 500=0.31% 00:19:36.320 cpu : usr=0.67%, sys=0.98%, ctx=5114, majf=0, minf=1 00:19:36.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:36.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.320 issued rwts: total=0,4464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.320 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.320 job4: (groupid=0, jobs=1): err= 0: pid=78660: Sun Jul 14 21:18:46 2024 00:19:36.320 write: IOPS=347, BW=87.0MiB/s (91.2MB/s)(881MiB/10132msec); 0 zone resets 00:19:36.320 slat (usec): min=15, max=89727, avg=2832.25, stdev=5230.40 00:19:36.320 clat (msec): min=91, max=278, avg=181.11, stdev=32.03 00:19:36.320 lat (msec): min=91, max=278, avg=183.94, stdev=32.13 00:19:36.320 clat percentiles (msec): 00:19:36.320 | 1.00th=[ 138], 5.00th=[ 142], 10.00th=[ 144], 20.00th=[ 150], 00:19:36.320 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 186], 60.00th=[ 197], 00:19:36.320 | 70.00th=[ 209], 80.00th=[ 213], 90.00th=[ 222], 95.00th=[ 226], 00:19:36.320 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 271], 99.95th=[ 279], 00:19:36.320 | 99.99th=[ 279] 00:19:36.320 bw ( KiB/s): min=67072, max=110592, per=7.00%, avg=88594.20, stdev=16039.25, samples=20 00:19:36.320 iops : min= 262, max= 432, avg=346.05, stdev=62.67, samples=20 00:19:36.320 lat (msec) : 100=0.09%, 250=99.52%, 500=0.40% 00:19:36.320 cpu : usr=0.57%, sys=1.05%, ctx=4607, majf=0, minf=1 00:19:36.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:36.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.320 issued rwts: total=0,3524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.320 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.320 job5: (groupid=0, jobs=1): err= 0: pid=78661: Sun Jul 14 21:18:46 2024 00:19:36.320 write: IOPS=949, BW=237MiB/s (249MB/s)(2388MiB/10061msec); 0 zone resets 00:19:36.320 slat (usec): min=15, max=6876, avg=1042.56, stdev=1752.89 00:19:36.320 clat (msec): min=9, max=129, avg=66.34, stdev= 4.76 00:19:36.320 lat (msec): min=9, max=129, avg=67.38, stdev= 4.51 00:19:36.320 clat percentiles (msec): 00:19:36.320 | 1.00th=[ 61], 5.00th=[ 62], 10.00th=[ 62], 20.00th=[ 65], 00:19:36.320 | 30.00th=[ 66], 40.00th=[ 66], 50.00th=[ 66], 60.00th=[ 67], 00:19:36.320 | 70.00th=[ 69], 80.00th=[ 70], 90.00th=[ 70], 95.00th=[ 71], 00:19:36.320 | 99.00th=[ 72], 99.50th=[ 78], 99.90th=[ 122], 99.95th=[ 126], 00:19:36.320 | 99.99th=[ 130] 00:19:36.320 bw ( KiB/s): min=231936, max=252416, per=19.21%, avg=242944.95, stdev=7652.78, samples=20 00:19:36.320 iops : min= 906, max= 986, avg=948.95, stdev=29.87, samples=20 00:19:36.320 lat (msec) : 10=0.02%, 20=0.08%, 50=0.33%, 100=99.29%, 250=0.27% 00:19:36.320 cpu : usr=1.49%, sys=1.86%, ctx=11693, majf=0, minf=1 00:19:36.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:36.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.321 issued rwts: total=0,9553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.321 
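As a sanity check on these per-job summaries, the BW column is simply IOPS times the 256 KiB block size, and the per= figure is the job's share of the aggregate reported in the run status line at the end: for job3 above, 440 IOPS x 262144 bytes is about 115 MB/s (110 MiB/s), matching BW=110MiB/s (115MB/s), and 112654 KiB/s divided by the 1235 MiB/s WRITE aggregate gives the quoted per=8.91%.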
job6: (groupid=0, jobs=1): err= 0: pid=78662: Sun Jul 14 21:18:46 2024 00:19:36.321 write: IOPS=318, BW=79.6MiB/s (83.4MB/s)(809MiB/10160msec); 0 zone resets 00:19:36.321 slat (usec): min=17, max=127489, avg=3036.48, stdev=5838.20 00:19:36.321 clat (msec): min=62, max=337, avg=197.95, stdev=24.90 00:19:36.321 lat (msec): min=67, max=337, avg=200.98, stdev=24.76 00:19:36.321 clat percentiles (msec): 00:19:36.321 | 1.00th=[ 97], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:19:36.321 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 201], 00:19:36.321 | 70.00th=[ 211], 80.00th=[ 215], 90.00th=[ 224], 95.00th=[ 232], 00:19:36.321 | 99.00th=[ 271], 99.50th=[ 292], 99.90th=[ 326], 99.95th=[ 338], 00:19:36.321 | 99.99th=[ 338] 00:19:36.321 bw ( KiB/s): min=59392, max=102912, per=6.42%, avg=81170.00, stdev=9242.38, samples=20 00:19:36.321 iops : min= 232, max= 402, avg=317.05, stdev=36.12, samples=20 00:19:36.321 lat (msec) : 100=1.05%, 250=97.56%, 500=1.39% 00:19:36.321 cpu : usr=0.63%, sys=0.94%, ctx=2779, majf=0, minf=1 00:19:36.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:36.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.321 issued rwts: total=0,3234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.321 job7: (groupid=0, jobs=1): err= 0: pid=78663: Sun Jul 14 21:18:46 2024 00:19:36.321 write: IOPS=440, BW=110MiB/s (115MB/s)(1116MiB/10135msec); 0 zone resets 00:19:36.321 slat (usec): min=16, max=11834, avg=2234.28, stdev=3828.10 00:19:36.321 clat (msec): min=15, max=280, avg=143.01, stdev=14.55 00:19:36.321 lat (msec): min=15, max=280, avg=145.24, stdev=14.23 00:19:36.321 clat percentiles (msec): 00:19:36.321 | 1.00th=[ 94], 5.00th=[ 133], 10.00th=[ 134], 20.00th=[ 140], 00:19:36.321 | 30.00th=[ 142], 40.00th=[ 142], 50.00th=[ 142], 60.00th=[ 144], 00:19:36.321 | 70.00th=[ 148], 80.00th=[ 150], 90.00th=[ 153], 95.00th=[ 153], 00:19:36.321 | 99.00th=[ 180], 99.50th=[ 226], 99.90th=[ 271], 99.95th=[ 271], 00:19:36.321 | 99.99th=[ 279] 00:19:36.321 bw ( KiB/s): min=107520, max=116736, per=8.91%, avg=112666.15, stdev=3482.74, samples=20 00:19:36.321 iops : min= 420, max= 456, avg=440.05, stdev=13.56, samples=20 00:19:36.321 lat (msec) : 20=0.07%, 50=0.45%, 100=0.54%, 250=98.63%, 500=0.31% 00:19:36.321 cpu : usr=0.80%, sys=1.30%, ctx=5354, majf=0, minf=1 00:19:36.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:36.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.321 issued rwts: total=0,4464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.321 job8: (groupid=0, jobs=1): err= 0: pid=78668: Sun Jul 14 21:18:46 2024 00:19:36.321 write: IOPS=438, BW=110MiB/s (115MB/s)(1110MiB/10136msec); 0 zone resets 00:19:36.321 slat (usec): min=18, max=36014, avg=2246.50, stdev=3874.35 00:19:36.321 clat (msec): min=42, max=277, avg=143.74, stdev=12.57 00:19:36.321 lat (msec): min=42, max=277, avg=145.98, stdev=12.12 00:19:36.321 clat percentiles (msec): 00:19:36.321 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 134], 20.00th=[ 140], 00:19:36.321 | 30.00th=[ 142], 40.00th=[ 142], 50.00th=[ 142], 60.00th=[ 144], 00:19:36.321 | 70.00th=[ 148], 80.00th=[ 150], 
90.00th=[ 153], 95.00th=[ 153], 00:19:36.321 | 99.00th=[ 178], 99.50th=[ 224], 99.90th=[ 271], 99.95th=[ 271], 00:19:36.321 | 99.99th=[ 279] 00:19:36.321 bw ( KiB/s): min=104448, max=116736, per=8.86%, avg=112054.90, stdev=3737.55, samples=20 00:19:36.321 iops : min= 408, max= 456, avg=437.70, stdev=14.61, samples=20 00:19:36.321 lat (msec) : 50=0.09%, 100=0.63%, 250=98.96%, 500=0.32% 00:19:36.321 cpu : usr=0.81%, sys=1.37%, ctx=4993, majf=0, minf=1 00:19:36.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:36.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.321 issued rwts: total=0,4441,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.321 job9: (groupid=0, jobs=1): err= 0: pid=78669: Sun Jul 14 21:18:46 2024 00:19:36.321 write: IOPS=490, BW=123MiB/s (129MB/s)(1246MiB/10166msec); 0 zone resets 00:19:36.321 slat (usec): min=16, max=12850, avg=2000.34, stdev=3604.73 00:19:36.321 clat (msec): min=15, max=346, avg=128.47, stdev=40.41 00:19:36.321 lat (msec): min=15, max=346, avg=130.47, stdev=40.85 00:19:36.321 clat percentiles (msec): 00:19:36.321 | 1.00th=[ 79], 5.00th=[ 96], 10.00th=[ 96], 20.00th=[ 102], 00:19:36.321 | 30.00th=[ 103], 40.00th=[ 103], 50.00th=[ 103], 60.00th=[ 104], 00:19:36.321 | 70.00th=[ 150], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 190], 00:19:36.321 | 99.00th=[ 209], 99.50th=[ 279], 99.90th=[ 334], 99.95th=[ 334], 00:19:36.321 | 99.99th=[ 347] 00:19:36.321 bw ( KiB/s): min=86016, max=161792, per=9.96%, avg=125994.40, stdev=34675.60, samples=20 00:19:36.321 iops : min= 336, max= 632, avg=492.15, stdev=135.47, samples=20 00:19:36.321 lat (msec) : 20=0.16%, 50=0.48%, 100=16.01%, 250=82.67%, 500=0.68% 00:19:36.321 cpu : usr=0.80%, sys=1.58%, ctx=5791, majf=0, minf=1 00:19:36.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:36.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.321 issued rwts: total=0,4985,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.321 job10: (groupid=0, jobs=1): err= 0: pid=78670: Sun Jul 14 21:18:46 2024 00:19:36.321 write: IOPS=488, BW=122MiB/s (128MB/s)(1242MiB/10164msec); 0 zone resets 00:19:36.321 slat (usec): min=16, max=26931, avg=2009.95, stdev=3640.20 00:19:36.321 clat (msec): min=29, max=345, avg=128.93, stdev=39.86 00:19:36.321 lat (msec): min=29, max=345, avg=130.94, stdev=40.28 00:19:36.321 clat percentiles (msec): 00:19:36.321 | 1.00th=[ 95], 5.00th=[ 96], 10.00th=[ 96], 20.00th=[ 102], 00:19:36.321 | 30.00th=[ 103], 40.00th=[ 103], 50.00th=[ 103], 60.00th=[ 105], 00:19:36.321 | 70.00th=[ 150], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 190], 00:19:36.321 | 99.00th=[ 209], 99.50th=[ 279], 99.90th=[ 334], 99.95th=[ 334], 00:19:36.321 | 99.99th=[ 347] 00:19:36.321 bw ( KiB/s): min=84480, max=161792, per=9.92%, avg=125465.85, stdev=34368.63, samples=20 00:19:36.321 iops : min= 330, max= 632, avg=490.00, stdev=134.20, samples=20 00:19:36.321 lat (msec) : 50=0.26%, 100=15.73%, 250=83.33%, 500=0.68% 00:19:36.321 cpu : usr=0.69%, sys=1.06%, ctx=6376, majf=0, minf=1 00:19:36.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:36.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.321 issued rwts: total=0,4966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.321 00:19:36.321 Run status group 0 (all jobs): 00:19:36.321 WRITE: bw=1235MiB/s (1295MB/s), 79.6MiB/s-237MiB/s (83.4MB/s-249MB/s), io=12.3GiB (13.2GB), run=10061-10166msec 00:19:36.321 00:19:36.321 Disk stats (read/write): 00:19:36.321 nvme0n1: ios=50/6996, merge=0/0, ticks=50/1211744, in_queue=1211794, util=97.78% 00:19:36.321 nvme10n1: ios=49/7095, merge=0/0, ticks=49/1212121, in_queue=1212170, util=97.93% 00:19:36.321 nvme1n1: ios=45/6700, merge=0/0, ticks=48/1211318, in_queue=1211366, util=98.10% 00:19:36.321 nvme2n1: ios=38/8798, merge=0/0, ticks=47/1213971, in_queue=1214018, util=98.26% 00:19:36.321 nvme3n1: ios=35/6907, merge=0/0, ticks=51/1211784, in_queue=1211835, util=98.12% 00:19:36.321 nvme4n1: ios=5/18963, merge=0/0, ticks=20/1217694, in_queue=1217714, util=98.25% 00:19:36.321 nvme5n1: ios=0/6327, merge=0/0, ticks=0/1210786, in_queue=1210786, util=98.20% 00:19:36.321 nvme6n1: ios=0/8796, merge=0/0, ticks=0/1213062, in_queue=1213062, util=98.36% 00:19:36.321 nvme7n1: ios=0/8747, merge=0/0, ticks=0/1213156, in_queue=1213156, util=98.60% 00:19:36.321 nvme8n1: ios=0/9840, merge=0/0, ticks=0/1211265, in_queue=1211265, util=98.79% 00:19:36.321 nvme9n1: ios=0/9798, merge=0/0, ticks=0/1210866, in_queue=1210866, util=98.83% 00:19:36.321 21:18:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:19:36.321 21:18:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:19:36.321 21:18:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.321 21:18:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:36.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:36.321 NQN:nqn.2016-06.io.spdk:cnode2 
disconnected 1 controller(s) 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.321 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:36.322 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:36.322 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 
00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:36.322 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:36.322 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 
00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:36.322 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:36.322 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.322 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.322 21:18:47 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:36.581 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:36.581 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:36.581 21:18:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:36.581 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:36.581 21:18:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:36.582 21:18:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:36.582 21:18:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:36.840 rmmod nvme_tcp 00:19:36.840 rmmod nvme_fabrics 00:19:36.840 rmmod nvme_keyring 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 77985 ']' 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 77985 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 77985 ']' 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 77985 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77985 00:19:36.840 killing process with pid 77985 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77985' 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 77985 00:19:36.840 21:18:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 77985 
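The multiconnection teardown traced above repeats one pattern for each of the eleven subsystems: disconnect the initiator from the NQN, poll until no block device reports the matching SPDK<i> serial, then delete the subsystem over RPC. A minimal stand-alone sketch of that loop follows; the NQN prefix and serial names come from the trace, while the scripts/rpc.py path, the 15-try cap, and the 1-second poll interval are assumptions (rpc_cmd in the trace is a harness wrapper around the RPC call).

#!/usr/bin/env bash
# Sketch: tear down NVMe-oF subsystems cnode1..cnode11 the way the trace above does.
set -eu

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of SPDK's rpc.py

wait_serial_gone() {
    # Poll until no block device advertises the given serial number.
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i > 15 )) && { echo "device with serial $serial still present" >&2; return 1; }
        sleep 1
    done
}

for i in $(seq 1 11); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    wait_serial_gone "SPDK${i}"
    "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done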
00:19:40.124 21:18:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:40.124 21:18:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:40.124 21:18:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:40.124 21:18:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:40.124 21:18:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:40.124 21:18:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.124 21:18:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.124 21:18:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.124 21:18:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:40.124 ************************************ 00:19:40.124 END TEST nvmf_multiconnection 00:19:40.124 ************************************ 00:19:40.124 00:19:40.124 real 0m52.170s 00:19:40.124 user 2m51.759s 00:19:40.124 sys 0m33.281s 00:19:40.124 21:18:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:40.124 21:18:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:40.124 21:18:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:40.124 21:18:51 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:40.124 21:18:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:40.124 21:18:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:40.124 21:18:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:40.124 ************************************ 00:19:40.124 START TEST nvmf_initiator_timeout 00:19:40.124 ************************************ 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:40.124 * Looking for test storage... 
00:19:40.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:40.124 21:18:51 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:40.124 Cannot find device "nvmf_tgt_br" 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:40.124 Cannot find device "nvmf_tgt_br2" 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:40.124 Cannot find device "nvmf_tgt_br" 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:40.124 Cannot find device "nvmf_tgt_br2" 00:19:40.124 21:18:51 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:40.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:40.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
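The block above is nvmf_veth_init rebuilding the test network from scratch: one initiator-side veth on the host, two target-side veths moved into the nvmf_tgt_ns_spdk namespace, and a bridge tying the host-side peer ends together. Condensed into a runnable sketch (commands taken directly from the trace; the firewall rules and ping checks that complete the fixture follow next in the log):

# Veth/namespace topology built by nvmf_veth_init, as traced above.
set -eu
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# Three veth pairs: an interface end and a bridge end each.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side interface ends live inside the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target listen addresses.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up, then bridge the host-side peer ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if  up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br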
00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:40.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:19:40.124 00:19:40.124 --- 10.0.0.2 ping statistics --- 00:19:40.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.124 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:40.124 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:40.124 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:19:40.124 00:19:40.124 --- 10.0.0.3 ping statistics --- 00:19:40.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.124 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:40.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:40.124 00:19:40.124 --- 10.0.0.1 ping statistics --- 00:19:40.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.124 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=79062 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 79062 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 79062 ']' 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.124 21:18:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:40.382 [2024-07-14 21:18:51.691399] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:40.382 [2024-07-14 21:18:51.691574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.382 [2024-07-14 21:18:51.866971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:40.640 [2024-07-14 21:18:52.097325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.640 [2024-07-14 21:18:52.097427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.640 [2024-07-14 21:18:52.097474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.640 [2024-07-14 21:18:52.097488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.640 [2024-07-14 21:18:52.097501] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
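What follows in the trace is nvmfappstart: the nvmf_tgt application is launched inside the namespace (pid 79062 here) and the harness blocks until the app is listening on its RPC socket at /var/tmp/spdk.sock. A simplified sketch of that step is below; the launch command and socket path are taken from the trace, while the polling via spdk_get_version, the retry budget, and the scripts/rpc.py path are assumptions standing in for the harness's waitforlisten helper.

# Start nvmf_tgt inside the test namespace and wait for its RPC socket.
set -eu
NS=nvmf_tgt_ns_spdk
SPDK=/home/vagrant/spdk_repo/spdk   # repo root, per the paths in the trace

ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
echo "started nvmf_tgt as pid $nvmfpid"

for _ in $(seq 1 100); do
    # spdk_get_version only succeeds once the app is serving /var/tmp/spdk.sock.
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
        echo "nvmf_tgt is listening on /var/tmp/spdk.sock"
        break
    fi
    sleep 0.5
done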
00:19:40.640 [2024-07-14 21:18:52.097678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.640 [2024-07-14 21:18:52.097957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.640 [2024-07-14 21:18:52.098488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.640 [2024-07-14 21:18:52.098499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.899 [2024-07-14 21:18:52.304377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:41.157 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.157 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:41.157 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:41.157 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:41.157 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:41.157 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.157 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:41.157 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:41.157 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.157 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:41.416 Malloc0 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:41.416 Delay0 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:41.416 [2024-07-14 21:18:52.797567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:41.416 [2024-07-14 21:18:52.829730] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:41.416 21:18:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:19:43.942 21:18:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:43.942 21:18:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:43.943 21:18:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:43.943 21:18:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:43.943 21:18:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:43.943 21:18:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:19:43.943 21:18:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=79122 00:19:43.943 21:18:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:43.943 21:18:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:43.943 [global] 00:19:43.943 thread=1 00:19:43.943 invalidate=1 00:19:43.943 rw=write 00:19:43.943 time_based=1 00:19:43.943 runtime=60 00:19:43.943 ioengine=libaio 00:19:43.943 direct=1 00:19:43.943 bs=4096 00:19:43.943 iodepth=1 00:19:43.943 norandommap=0 00:19:43.943 numjobs=1 00:19:43.943 00:19:43.943 verify_dump=1 00:19:43.943 verify_backlog=512 00:19:43.943 verify_state_save=0 00:19:43.943 do_verify=1 00:19:43.943 verify=crc32c-intel 00:19:43.943 [job0] 00:19:43.943 filename=/dev/nvme0n1 00:19:43.943 Could not set queue depth (nvme0n1) 00:19:43.943 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:43.943 fio-3.35 00:19:43.943 Starting 1 thread 00:19:46.475 21:18:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:46.475 21:18:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.475 21:18:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:46.475 true 00:19:46.475 21:18:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.475 21:18:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:46.475 21:18:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.475 21:18:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:46.475 true 00:19:46.475 21:18:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.475 21:18:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:46.475 21:18:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.475 21:18:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:46.475 true 00:19:46.475 21:18:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.475 21:18:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:46.475 21:18:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.475 21:18:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:46.748 true 00:19:46.748 21:18:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.748 21:18:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:50.081 true 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:50.081 true 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:50.081 true 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:50.081 true 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:50.081 21:19:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 79122 00:20:46.300 00:20:46.300 job0: (groupid=0, jobs=1): err= 0: pid=79149: Sun Jul 14 21:19:55 2024 00:20:46.300 read: IOPS=658, BW=2634KiB/s (2697kB/s)(154MiB/60000msec) 00:20:46.300 slat (nsec): min=11791, max=94720, avg=16341.60, stdev=4651.99 00:20:46.300 clat (usec): min=197, max=821, avg=250.61, stdev=27.86 00:20:46.300 lat (usec): min=209, max=849, avg=266.95, stdev=29.26 00:20:46.300 clat percentiles (usec): 00:20:46.300 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 229], 00:20:46.300 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:20:46.300 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 310], 00:20:46.300 | 99.00th=[ 343], 99.50th=[ 351], 99.90th=[ 375], 99.95th=[ 396], 00:20:46.300 | 99.99th=[ 562] 00:20:46.300 write: IOPS=665, BW=2662KiB/s (2726kB/s)(156MiB/60000msec); 0 zone resets 00:20:46.300 slat (usec): min=13, max=8739, avg=24.19, stdev=59.20 00:20:46.300 clat (usec): min=143, max=40512k, avg=1210.10, stdev=202720.69 00:20:46.300 lat (usec): min=161, max=40512k, avg=1234.29, stdev=202720.68 00:20:46.300 clat percentiles (usec): 00:20:46.300 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:20:46.300 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 198], 00:20:46.300 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 231], 95.00th=[ 247], 00:20:46.300 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 474], 99.95th=[ 594], 00:20:46.300 | 99.99th=[ 881] 00:20:46.300 bw ( KiB/s): min= 3248, max= 9136, per=100.00%, avg=7981.51, stdev=938.39, samples=39 00:20:46.300 iops : min= 812, max= 2284, avg=1995.36, stdev=234.59, samples=39 00:20:46.300 lat (usec) : 250=77.23%, 500=22.72%, 750=0.04%, 1000=0.01% 00:20:46.300 lat (msec) : 2=0.01%, >=2000=0.01% 00:20:46.300 cpu : usr=0.56%, sys=2.03%, ctx=79450, majf=0, minf=2 00:20:46.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.300 issued rwts: total=39503,39936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.300 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:46.300 00:20:46.300 Run status group 0 (all jobs): 00:20:46.300 READ: bw=2634KiB/s (2697kB/s), 2634KiB/s-2634KiB/s (2697kB/s-2697kB/s), io=154MiB (162MB), run=60000-60000msec 00:20:46.300 WRITE: bw=2662KiB/s (2726kB/s), 2662KiB/s-2662KiB/s (2726kB/s-2726kB/s), io=156MiB (164MB), run=60000-60000msec 00:20:46.300 00:20:46.300 Disk stats (read/write): 00:20:46.300 nvme0n1: ios=39679/39572, merge=0/0, ticks=10368/8328, in_queue=18696, util=99.70% 00:20:46.300 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:46.300 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:20:46.300 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:46.300 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:20:46.300 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:46.300 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:46.300 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:46.300 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:46.300 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:20:46.300 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:46.300 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:46.300 nvmf hotplug test: fio successful as expected 00:20:46.300 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:46.300 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.300 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:46.300 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.300 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:46.301 rmmod nvme_tcp 00:20:46.301 rmmod nvme_fabrics 00:20:46.301 rmmod nvme_keyring 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 79062 ']' 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 79062 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 79062 ']' 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 79062 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
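The pass/fail logic of this test rests on the Delay0 delay bdev created earlier from Malloc0: while fio runs its 60-second verified-write job against the exported namespace, the per-I/O latencies are pushed from 30 us up past the initiator's default 30-second I/O timeout and then dropped back so the stalled I/O can drain, which is the "fio successful as expected" outcome above. The RPC sequence, reconstructed from the rpc_cmd calls in the trace (the scripts/rpc.py path is an assumption; the values and the sleep 3 between the two groups mirror the trace):

# Delay-bdev latency toggling used by the initiator-timeout test.
set -eu
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed rpc.py location

# Delay0 wraps Malloc0 with 30 us average and p99 read/write latency.
"$RPC" bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

# With fio running, raise the latencies beyond the initiator's I/O timeout
# (arguments are in microseconds: 31000000 us = 31 s, 310000000 us = 310 s).
"$RPC" bdev_delay_update_latency Delay0 avg_read  31000000
"$RPC" bdev_delay_update_latency Delay0 avg_write 31000000
"$RPC" bdev_delay_update_latency Delay0 p99_read  31000000
"$RPC" bdev_delay_update_latency Delay0 p99_write 310000000

sleep 3   # the trace pauses between raising and restoring the latencies

# Restore low latencies so the outstanding I/O completes and fio finishes.
"$RPC" bdev_delay_update_latency Delay0 avg_read  30
"$RPC" bdev_delay_update_latency Delay0 avg_write 30
"$RPC" bdev_delay_update_latency Delay0 p99_read  30
"$RPC" bdev_delay_update_latency Delay0 p99_write 30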
00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79062 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:46.301 killing process with pid 79062 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79062' 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 79062 00:20:46.301 21:19:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 79062 00:20:46.301 21:19:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:46.301 21:19:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:46.301 21:19:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:46.301 21:19:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.301 21:19:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:46.301 21:19:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.301 21:19:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.301 21:19:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.301 21:19:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:46.301 00:20:46.301 real 1m5.493s 00:20:46.301 user 3m54.279s 00:20:46.301 sys 0m22.293s 00:20:46.301 21:19:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:46.301 ************************************ 00:20:46.301 END TEST nvmf_initiator_timeout 00:20:46.301 21:19:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:46.301 ************************************ 00:20:46.301 21:19:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:46.301 21:19:56 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:20:46.301 21:19:56 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:46.301 21:19:56 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:46.301 21:19:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:46.301 21:19:56 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:46.301 21:19:56 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:46.301 21:19:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:46.301 21:19:56 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:20:46.301 21:19:56 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:46.301 21:19:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:46.301 21:19:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:46.301 21:19:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:46.301 ************************************ 00:20:46.301 START TEST nvmf_identify 00:20:46.301 ************************************ 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 
00:20:46.301 * Looking for test storage... 00:20:46.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.301 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:46.302 Cannot find device "nvmf_tgt_br" 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:46.302 Cannot find device "nvmf_tgt_br2" 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:46.302 Cannot find device "nvmf_tgt_br" 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:46.302 Cannot find device "nvmf_tgt_br2" 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:46.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.302 21:19:56 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:46.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:46.302 21:19:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:46.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:46.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:20:46.302 00:20:46.302 --- 10.0.0.2 ping statistics --- 00:20:46.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.302 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:46.302 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:46.302 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:20:46.302 00:20:46.302 --- 10.0.0.3 ping statistics --- 00:20:46.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.302 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:46.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:46.302 00:20:46.302 --- 10.0.0.1 ping statistics --- 00:20:46.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.302 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=79975 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 79975 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 79975 ']' 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
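The nvmf_veth_init sequence above amounts to one Linux bridge joining a host-side veth (the initiator at 10.0.0.1) to veth peers whose other ends sit inside the nvmf_tgt_ns_spdk namespace (the target addresses 10.0.0.2 and 10.0.0.3); the "Cannot find device" and "Cannot open network namespace" messages earlier are just the harness tearing down wiring that did not exist yet. A condensed sketch of the same wiring, assuming iproute2 and root; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is cabled the same way as the first:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end will move into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the initiator and target halves
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # initiator -> target, as verified above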
00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.302 21:19:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:46.302 [2024-07-14 21:19:57.302504] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:46.302 [2024-07-14 21:19:57.302676] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.302 [2024-07-14 21:19:57.476963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.302 [2024-07-14 21:19:57.646773] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.302 [2024-07-14 21:19:57.646847] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.302 [2024-07-14 21:19:57.646863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.302 [2024-07-14 21:19:57.646875] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.302 [2024-07-14 21:19:57.646888] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.302 [2024-07-14 21:19:57.647087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.302 [2024-07-14 21:19:57.647314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.302 [2024-07-14 21:19:57.647994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.302 [2024-07-14 21:19:57.648003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:46.302 [2024-07-14 21:19:57.819678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:46.870 [2024-07-14 21:19:58.227464] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:46.870 Malloc0 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:46.870 [2024-07-14 21:19:58.377499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.870 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:46.871 [ 00:20:46.871 { 00:20:46.871 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:46.871 "subtype": "Discovery", 00:20:46.871 "listen_addresses": [ 00:20:46.871 { 00:20:46.871 "trtype": "TCP", 00:20:46.871 "adrfam": "IPv4", 00:20:46.871 "traddr": "10.0.0.2", 00:20:46.871 "trsvcid": "4420" 00:20:46.871 } 00:20:46.871 ], 00:20:46.871 "allow_any_host": true, 00:20:46.871 "hosts": [] 00:20:46.871 }, 00:20:46.871 { 00:20:46.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.871 "subtype": "NVMe", 00:20:46.871 "listen_addresses": [ 00:20:46.871 { 00:20:46.871 "trtype": "TCP", 00:20:46.871 "adrfam": "IPv4", 00:20:46.871 "traddr": "10.0.0.2", 00:20:46.871 "trsvcid": "4420" 00:20:46.871 } 00:20:46.871 ], 00:20:46.871 "allow_any_host": true, 00:20:46.871 "hosts": [], 00:20:46.871 "serial_number": "SPDK00000000000001", 00:20:46.871 "model_number": "SPDK bdev Controller", 00:20:46.871 "max_namespaces": 32, 00:20:46.871 "min_cntlid": 1, 00:20:46.871 "max_cntlid": 65519, 00:20:46.871 "namespaces": [ 00:20:46.871 { 00:20:46.871 "nsid": 1, 00:20:46.871 "bdev_name": "Malloc0", 00:20:46.871 "name": "Malloc0", 00:20:46.871 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:46.871 "eui64": "ABCDEF0123456789", 00:20:46.871 "uuid": "4088c33a-b3d5-4383-b3c0-8e762a93f865" 00:20:46.871 } 00:20:46.871 ] 00:20:46.871 } 00:20:46.871 ] 00:20:46.871 21:19:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.871 21:19:58 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # 
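At this point the target side is fully provisioned: nvmf_tgt runs inside the namespace, a TCP transport and a ram-backed Malloc0 bdev exist, and nvmf_get_subsystems reports the discovery subsystem plus nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. Outside the rpc_cmd wrapper the same setup could be driven directly with scripts/rpc.py; a sketch assuming the repo path used in this job (the RPC Unix socket lives on the shared filesystem, so it is reachable from the root namespace):

  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  # wait for /var/tmp/spdk.sock before issuing RPCs (the harness does this via waitforlisten)
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  "$SPDK/scripts/rpc.py" nvmf_get_subsystems               # should report discovery + cnode1, as above

The spdk_nvme_identify invocation that follows (host/identify.sh@39) points at the discovery subsystem advertised here.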
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:47.133 [2024-07-14 21:19:58.463078] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:47.133 [2024-07-14 21:19:58.463217] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80016 ] 00:20:47.133 [2024-07-14 21:19:58.631186] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:47.133 [2024-07-14 21:19:58.631329] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:47.133 [2024-07-14 21:19:58.631343] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:47.133 [2024-07-14 21:19:58.631370] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:47.133 [2024-07-14 21:19:58.631386] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:47.133 [2024-07-14 21:19:58.631550] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:47.133 [2024-07-14 21:19:58.631616] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:47.133 [2024-07-14 21:19:58.646776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:47.133 [2024-07-14 21:19:58.646834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:47.133 [2024-07-14 21:19:58.646851] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:47.133 [2024-07-14 21:19:58.646861] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:47.133 [2024-07-14 21:19:58.646941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.133 [2024-07-14 21:19:58.646956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.133 [2024-07-14 21:19:58.646965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.133 [2024-07-14 21:19:58.646988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:47.133 [2024-07-14 21:19:58.647033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.133 [2024-07-14 21:19:58.653785] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.133 [2024-07-14 21:19:58.653838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.133 [2024-07-14 21:19:58.653849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.133 [2024-07-14 21:19:58.653859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.133 [2024-07-14 21:19:58.653886] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:47.133 [2024-07-14 21:19:58.653907] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:47.133 [2024-07-14 21:19:58.653918] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:47.133 [2024-07-14 21:19:58.653937] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.133 [2024-07-14 21:19:58.653947] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.133 [2024-07-14 21:19:58.653955] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.133 [2024-07-14 21:19:58.653971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.133 [2024-07-14 21:19:58.654007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.133 [2024-07-14 21:19:58.654112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.133 [2024-07-14 21:19:58.654129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.133 [2024-07-14 21:19:58.654136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.133 [2024-07-14 21:19:58.654147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.133 [2024-07-14 21:19:58.654161] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:47.133 [2024-07-14 21:19:58.654176] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:47.133 [2024-07-14 21:19:58.654190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.133 [2024-07-14 21:19:58.654199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.133 [2024-07-14 21:19:58.654206] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.133 [2024-07-14 21:19:58.654223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.133 [2024-07-14 21:19:58.654257] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.133 [2024-07-14 21:19:58.654329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.133 [2024-07-14 21:19:58.654341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.133 [2024-07-14 21:19:58.654347] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.133 [2024-07-14 21:19:58.654355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.133 [2024-07-14 21:19:58.654365] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:47.133 [2024-07-14 21:19:58.654379] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:47.133 [2024-07-14 21:19:58.654392] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.133 [2024-07-14 21:19:58.654401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.654408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.134 [2024-07-14 21:19:58.654422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 
cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.134 [2024-07-14 21:19:58.654453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.134 [2024-07-14 21:19:58.654532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.134 [2024-07-14 21:19:58.654543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.134 [2024-07-14 21:19:58.654550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.654557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.134 [2024-07-14 21:19:58.654570] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:47.134 [2024-07-14 21:19:58.654588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.654598] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.654617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.134 [2024-07-14 21:19:58.654631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.134 [2024-07-14 21:19:58.654677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.134 [2024-07-14 21:19:58.654754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.134 [2024-07-14 21:19:58.654766] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.134 [2024-07-14 21:19:58.654773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.654780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.134 [2024-07-14 21:19:58.654806] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:47.134 [2024-07-14 21:19:58.654816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:47.134 [2024-07-14 21:19:58.654830] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:47.134 [2024-07-14 21:19:58.654940] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:47.134 [2024-07-14 21:19:58.654950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:47.134 [2024-07-14 21:19:58.654967] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.654977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.654991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.134 [2024-07-14 21:19:58.655007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.134 [2024-07-14 21:19:58.655038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x62600001b100, cid 0, qid 0 00:20:47.134 [2024-07-14 21:19:58.655120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.134 [2024-07-14 21:19:58.655136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.134 [2024-07-14 21:19:58.655144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.655151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.134 [2024-07-14 21:19:58.655162] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:47.134 [2024-07-14 21:19:58.655180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.655190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.655198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.134 [2024-07-14 21:19:58.655212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.134 [2024-07-14 21:19:58.655240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.134 [2024-07-14 21:19:58.655311] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.134 [2024-07-14 21:19:58.655323] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.134 [2024-07-14 21:19:58.655330] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.655337] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.134 [2024-07-14 21:19:58.655346] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:47.134 [2024-07-14 21:19:58.655360] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:47.134 [2024-07-14 21:19:58.655374] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:47.134 [2024-07-14 21:19:58.655394] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:47.134 [2024-07-14 21:19:58.655414] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.655424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.134 [2024-07-14 21:19:58.655442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.134 [2024-07-14 21:19:58.655502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.134 [2024-07-14 21:19:58.655629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.134 [2024-07-14 21:19:58.655642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.134 [2024-07-14 21:19:58.655649] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.655658] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:20:47.134 [2024-07-14 21:19:58.655666] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:47.134 [2024-07-14 21:19:58.655675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.655696] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.655705] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.655718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.134 [2024-07-14 21:19:58.655731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.134 [2024-07-14 21:19:58.655738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.655745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.134 [2024-07-14 21:19:58.655779] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:47.134 [2024-07-14 21:19:58.655790] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:47.134 [2024-07-14 21:19:58.655831] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:47.134 [2024-07-14 21:19:58.655842] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:47.134 [2024-07-14 21:19:58.655851] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:47.134 [2024-07-14 21:19:58.655861] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:47.134 [2024-07-14 21:19:58.655881] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:47.134 [2024-07-14 21:19:58.655901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.655911] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.655919] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.134 [2024-07-14 21:19:58.655939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:47.134 [2024-07-14 21:19:58.655973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.134 [2024-07-14 21:19:58.656051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.134 [2024-07-14 21:19:58.656063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.134 [2024-07-14 21:19:58.656070] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.656084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.134 [2024-07-14 21:19:58.656099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.134 [2024-07-14 
21:19:58.656108] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.656116] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.134 [2024-07-14 21:19:58.656149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.134 [2024-07-14 21:19:58.656161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.656168] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.656190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:47.134 [2024-07-14 21:19:58.656200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.134 [2024-07-14 21:19:58.656210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.656217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.134 [2024-07-14 21:19:58.656223] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:47.134 [2024-07-14 21:19:58.656235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.135 [2024-07-14 21:19:58.656246] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.656252] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.656259] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.135 [2024-07-14 21:19:58.656269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.135 [2024-07-14 21:19:58.656278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:47.135 [2024-07-14 21:19:58.656300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:47.135 [2024-07-14 21:19:58.656313] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.656349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:47.135 [2024-07-14 21:19:58.656380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.135 [2024-07-14 21:19:58.656415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.135 [2024-07-14 21:19:58.656428] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:47.135 [2024-07-14 21:19:58.656437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:47.135 [2024-07-14 21:19:58.656449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.135 [2024-07-14 21:19:58.656458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:47.135 [2024-07-14 21:19:58.656615] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.135 [2024-07-14 21:19:58.656628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.135 [2024-07-14 21:19:58.656635] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.656643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:47.135 [2024-07-14 21:19:58.656655] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:47.135 [2024-07-14 21:19:58.656666] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:47.135 [2024-07-14 21:19:58.656691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.656721] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:47.135 [2024-07-14 21:19:58.656767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.135 [2024-07-14 21:19:58.656797] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:47.135 [2024-07-14 21:19:58.656962] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.135 [2024-07-14 21:19:58.656977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.135 [2024-07-14 21:19:58.656985] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.656992] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:47.135 [2024-07-14 21:19:58.657001] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:47.135 [2024-07-14 21:19:58.657010] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.657023] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.657032] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.657049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.135 [2024-07-14 21:19:58.657060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.135 [2024-07-14 21:19:58.657067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.657078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:47.135 [2024-07-14 21:19:58.657105] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:47.135 [2024-07-14 21:19:58.657163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.657192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:47.135 [2024-07-14 21:19:58.657207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.135 [2024-07-14 21:19:58.657220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
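The DEBUG stream above traces the discovery controller's bring-up over the fabric: icreq/icresp on the TCP connection, FABRIC CONNECT on the admin queue, FABRIC PROPERTY GET/SET of VS, CAP, CC and CSTS until CC.EN = 1 and CSTS.RDY = 1, then IDENTIFY controller, AER configuration and the keep-alive timeout; the lines that follow read the discovery log page that is printed further down. The same bring-up can presumably be replayed against the NVM subsystem created earlier by swapping the subnqn in the transport ID string (a sketch; tool path and flags as in the invocation above):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all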
00:20:47.135 [2024-07-14 21:19:58.657228] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.657241] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:47.135 [2024-07-14 21:19:58.657256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.135 [2024-07-14 21:19:58.657294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:47.135 [2024-07-14 21:19:58.657311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:47.135 [2024-07-14 21:19:58.657690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.135 [2024-07-14 21:19:58.657719] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.135 [2024-07-14 21:19:58.657729] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.657737] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:20:47.135 [2024-07-14 21:19:58.657745] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:20:47.135 [2024-07-14 21:19:58.660852] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.660892] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.660901] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.660911] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.135 [2024-07-14 21:19:58.660922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.135 [2024-07-14 21:19:58.660928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.660936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:47.135 [2024-07-14 21:19:58.660959] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.135 [2024-07-14 21:19:58.660971] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.135 [2024-07-14 21:19:58.660977] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.660983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:47.135 [2024-07-14 21:19:58.661012] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.661026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:47.135 [2024-07-14 21:19:58.661042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.135 [2024-07-14 21:19:58.661085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:47.135 [2024-07-14 21:19:58.661224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.135 [2024-07-14 21:19:58.661237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.135 [2024-07-14 21:19:58.661243] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 
21:19:58.661250] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:20:47.135 [2024-07-14 21:19:58.661258] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:20:47.135 [2024-07-14 21:19:58.661265] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.661276] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.661290] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.661303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.135 [2024-07-14 21:19:58.661313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.135 [2024-07-14 21:19:58.661318] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.661325] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:47.135 [2024-07-14 21:19:58.661346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.661355] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:47.135 [2024-07-14 21:19:58.661369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.135 [2024-07-14 21:19:58.661405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:47.135 [2024-07-14 21:19:58.661518] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.135 [2024-07-14 21:19:58.661533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.135 [2024-07-14 21:19:58.661540] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.661547] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:20:47.135 [2024-07-14 21:19:58.661554] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:20:47.135 [2024-07-14 21:19:58.661572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.661583] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.135 [2024-07-14 21:19:58.661590] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.135 ===================================================== 00:20:47.135 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:47.135 ===================================================== 00:20:47.135 Controller Capabilities/Features 00:20:47.135 ================================ 00:20:47.135 Vendor ID: 0000 00:20:47.135 Subsystem Vendor ID: 0000 00:20:47.135 Serial Number: .................... 00:20:47.135 Model Number: ........................................ 
00:20:47.135 Firmware Version: 24.09 00:20:47.135 Recommended Arb Burst: 0 00:20:47.135 IEEE OUI Identifier: 00 00 00 00:20:47.135 Multi-path I/O 00:20:47.135 May have multiple subsystem ports: No 00:20:47.136 May have multiple controllers: No 00:20:47.136 Associated with SR-IOV VF: No 00:20:47.136 Max Data Transfer Size: 131072 00:20:47.136 Max Number of Namespaces: 0 00:20:47.136 Max Number of I/O Queues: 1024 00:20:47.136 NVMe Specification Version (VS): 1.3 00:20:47.136 NVMe Specification Version (Identify): 1.3 00:20:47.136 Maximum Queue Entries: 128 00:20:47.136 Contiguous Queues Required: Yes 00:20:47.136 Arbitration Mechanisms Supported 00:20:47.136 Weighted Round Robin: Not Supported 00:20:47.136 Vendor Specific: Not Supported 00:20:47.136 Reset Timeout: 15000 ms 00:20:47.136 Doorbell Stride: 4 bytes 00:20:47.136 NVM Subsystem Reset: Not Supported 00:20:47.136 Command Sets Supported 00:20:47.136 NVM Command Set: Supported 00:20:47.136 Boot Partition: Not Supported 00:20:47.136 Memory Page Size Minimum: 4096 bytes 00:20:47.136 Memory Page Size Maximum: 4096 bytes 00:20:47.136 Persistent Memory Region: Not Supported 00:20:47.136 Optional Asynchronous Events Supported 00:20:47.136 Namespace Attribute Notices: Not Supported 00:20:47.136 Firmware Activation Notices: Not Supported 00:20:47.136 ANA Change Notices: Not Supported 00:20:47.136 PLE Aggregate Log Change Notices: Not Supported 00:20:47.136 LBA Status Info Alert Notices: Not Supported 00:20:47.136 EGE Aggregate Log Change Notices: Not Supported 00:20:47.136 Normal NVM Subsystem Shutdown event: Not Supported 00:20:47.136 Zone Descriptor Change Notices: Not Supported 00:20:47.136 Discovery Log Change Notices: Supported 00:20:47.136 Controller Attributes 00:20:47.136 128-bit Host Identifier: Not Supported 00:20:47.136 Non-Operational Permissive Mode: Not Supported 00:20:47.136 NVM Sets: Not Supported 00:20:47.136 Read Recovery Levels: Not Supported 00:20:47.136 Endurance Groups: Not Supported 00:20:47.136 Predictable Latency Mode: Not Supported 00:20:47.136 Traffic Based Keep ALive: Not Supported 00:20:47.136 Namespace Granularity: Not Supported 00:20:47.136 SQ Associations: Not Supported 00:20:47.136 UUID List: Not Supported 00:20:47.136 Multi-Domain Subsystem: Not Supported 00:20:47.136 Fixed Capacity Management: Not Supported 00:20:47.136 Variable Capacity Management: Not Supported 00:20:47.136 Delete Endurance Group: Not Supported 00:20:47.136 Delete NVM Set: Not Supported 00:20:47.136 Extended LBA Formats Supported: Not Supported 00:20:47.136 Flexible Data Placement Supported: Not Supported 00:20:47.136 00:20:47.136 Controller Memory Buffer Support 00:20:47.136 ================================ 00:20:47.136 Supported: No 00:20:47.136 00:20:47.136 Persistent Memory Region Support 00:20:47.136 ================================ 00:20:47.136 Supported: No 00:20:47.136 00:20:47.136 Admin Command Set Attributes 00:20:47.136 ============================ 00:20:47.136 Security Send/Receive: Not Supported 00:20:47.136 Format NVM: Not Supported 00:20:47.136 Firmware Activate/Download: Not Supported 00:20:47.136 Namespace Management: Not Supported 00:20:47.136 Device Self-Test: Not Supported 00:20:47.136 Directives: Not Supported 00:20:47.136 NVMe-MI: Not Supported 00:20:47.136 Virtualization Management: Not Supported 00:20:47.136 Doorbell Buffer Config: Not Supported 00:20:47.136 Get LBA Status Capability: Not Supported 00:20:47.136 Command & Feature Lockdown Capability: Not Supported 00:20:47.136 Abort Command Limit: 1 00:20:47.136 Async 
Event Request Limit: 4 00:20:47.136 Number of Firmware Slots: N/A 00:20:47.136 Firmware Slot 1 Read-Only: N/A 00:20:47.136 Firmware Activation Without Reset: N/A 00:20:47.136 Multiple Update Detection Support: N/A 00:20:47.136 Firmware Update Granularity: No Information Provided 00:20:47.136 Per-Namespace SMART Log: No 00:20:47.136 Asymmetric Namespace Access Log Page: Not Supported 00:20:47.136 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:47.136 Command Effects Log Page: Not Supported 00:20:47.136 Get Log Page Extended Data: Supported 00:20:47.136 Telemetry Log Pages: Not Supported 00:20:47.136 Persistent Event Log Pages: Not Supported 00:20:47.136 Supported Log Pages Log Page: May Support 00:20:47.136 Commands Supported & Effects Log Page: Not Supported 00:20:47.136 Feature Identifiers & Effects Log Page:May Support 00:20:47.136 NVMe-MI Commands & Effects Log Page: May Support 00:20:47.136 Data Area 4 for Telemetry Log: Not Supported 00:20:47.136 Error Log Page Entries Supported: 128 00:20:47.136 Keep Alive: Not Supported 00:20:47.136 00:20:47.136 NVM Command Set Attributes 00:20:47.136 ========================== 00:20:47.136 Submission Queue Entry Size 00:20:47.136 Max: 1 00:20:47.136 Min: 1 00:20:47.136 Completion Queue Entry Size 00:20:47.136 Max: 1 00:20:47.136 Min: 1 00:20:47.136 Number of Namespaces: 0 00:20:47.136 Compare Command: Not Supported 00:20:47.136 Write Uncorrectable Command: Not Supported 00:20:47.136 Dataset Management Command: Not Supported 00:20:47.136 Write Zeroes Command: Not Supported 00:20:47.136 Set Features Save Field: Not Supported 00:20:47.136 Reservations: Not Supported 00:20:47.136 Timestamp: Not Supported 00:20:47.136 Copy: Not Supported 00:20:47.136 Volatile Write Cache: Not Present 00:20:47.136 Atomic Write Unit (Normal): 1 00:20:47.136 Atomic Write Unit (PFail): 1 00:20:47.136 Atomic Compare & Write Unit: 1 00:20:47.136 Fused Compare & Write: Supported 00:20:47.136 Scatter-Gather List 00:20:47.136 SGL Command Set: Supported 00:20:47.136 SGL Keyed: Supported 00:20:47.136 SGL Bit Bucket Descriptor: Not Supported 00:20:47.136 SGL Metadata Pointer: Not Supported 00:20:47.136 Oversized SGL: Not Supported 00:20:47.136 SGL Metadata Address: Not Supported 00:20:47.136 SGL Offset: Supported 00:20:47.136 Transport SGL Data Block: Not Supported 00:20:47.136 Replay Protected Memory Block: Not Supported 00:20:47.136 00:20:47.136 Firmware Slot Information 00:20:47.136 ========================= 00:20:47.136 Active slot: 0 00:20:47.136 00:20:47.136 00:20:47.136 Error Log 00:20:47.136 ========= 00:20:47.136 00:20:47.136 Active Namespaces 00:20:47.136 ================= 00:20:47.136 Discovery Log Page 00:20:47.136 ================== 00:20:47.136 Generation Counter: 2 00:20:47.136 Number of Records: 2 00:20:47.136 Record Format: 0 00:20:47.136 00:20:47.136 Discovery Log Entry 0 00:20:47.136 ---------------------- 00:20:47.136 Transport Type: 3 (TCP) 00:20:47.136 Address Family: 1 (IPv4) 00:20:47.136 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:47.136 Entry Flags: 00:20:47.136 Duplicate Returned Information: 1 00:20:47.136 Explicit Persistent Connection Support for Discovery: 1 00:20:47.136 Transport Requirements: 00:20:47.136 Secure Channel: Not Required 00:20:47.136 Port ID: 0 (0x0000) 00:20:47.136 Controller ID: 65535 (0xffff) 00:20:47.136 Admin Max SQ Size: 128 00:20:47.136 Transport Service Identifier: 4420 00:20:47.136 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:47.136 Transport Address: 10.0.0.2 00:20:47.136 
Discovery Log Entry 1 00:20:47.136 ---------------------- 00:20:47.136 Transport Type: 3 (TCP) 00:20:47.136 Address Family: 1 (IPv4) 00:20:47.136 Subsystem Type: 2 (NVM Subsystem) 00:20:47.136 Entry Flags: 00:20:47.136 Duplicate Returned Information: 0 00:20:47.136 Explicit Persistent Connection Support for Discovery: 0 00:20:47.136 Transport Requirements: 00:20:47.136 Secure Channel: Not Required 00:20:47.136 Port ID: 0 (0x0000) 00:20:47.136 Controller ID: 65535 (0xffff) 00:20:47.137 Admin Max SQ Size: 128 00:20:47.137 Transport Service Identifier: 4420 00:20:47.137 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:47.137 Transport Address: 10.0.0.2 [2024-07-14 21:19:58.661618] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.137 [2024-07-14 21:19:58.661631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.137 [2024-07-14 21:19:58.661638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.661644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:47.137 [2024-07-14 21:19:58.661811] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:47.137 [2024-07-14 21:19:58.661834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.137 [2024-07-14 21:19:58.661850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.137 [2024-07-14 21:19:58.661860] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:47.137 [2024-07-14 21:19:58.661870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.137 [2024-07-14 21:19:58.661878] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:47.137 [2024-07-14 21:19:58.661887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.137 [2024-07-14 21:19:58.661894] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.137 [2024-07-14 21:19:58.661904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.137 [2024-07-14 21:19:58.661923] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.661952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.661960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.137 [2024-07-14 21:19:58.661975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.137 [2024-07-14 21:19:58.662010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.137 [2024-07-14 21:19:58.662080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.137 [2024-07-14 21:19:58.662093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.137 [2024-07-14 21:19:58.662107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
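The discovery log page above advertises two entries on 10.0.0.2:4420, the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, and the DEBUG lines around it show the discovery controller being torn down once the page has been read (RTD3E = 0, 10 s shutdown timeout). A kernel initiator should see the same two entries through the nvme-tcp module loaded earlier in this run; a sketch assuming nvme-cli is installed:

  nvme discover -t tcp -a 10.0.0.2 -s 4420            # lists the same two discovery log entries
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list                                           # the Malloc0-backed namespace appears as a block device
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1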
00:20:47.137 [2024-07-14 21:19:58.662116] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.137 [2024-07-14 21:19:58.662131] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.662144] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.662166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.137 [2024-07-14 21:19:58.662180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.137 [2024-07-14 21:19:58.662215] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.137 [2024-07-14 21:19:58.662314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.137 [2024-07-14 21:19:58.662327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.137 [2024-07-14 21:19:58.662333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.662340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.137 [2024-07-14 21:19:58.662349] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:47.137 [2024-07-14 21:19:58.662362] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:47.137 [2024-07-14 21:19:58.662381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.662408] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.662416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.137 [2024-07-14 21:19:58.662434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.137 [2024-07-14 21:19:58.662463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.137 [2024-07-14 21:19:58.662533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.137 [2024-07-14 21:19:58.662545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.137 [2024-07-14 21:19:58.662557] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.662565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.137 [2024-07-14 21:19:58.662584] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.662593] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.662600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.137 [2024-07-14 21:19:58.662612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.137 [2024-07-14 21:19:58.662639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.137 [2024-07-14 21:19:58.662719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.137 [2024-07-14 21:19:58.662733] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.137 [2024-07-14 21:19:58.662739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.662746] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.137 [2024-07-14 21:19:58.662764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.662773] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.662780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.137 [2024-07-14 21:19:58.662792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.137 [2024-07-14 21:19:58.662855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.137 [2024-07-14 21:19:58.662931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.137 [2024-07-14 21:19:58.662944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.137 [2024-07-14 21:19:58.662950] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.662958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.137 [2024-07-14 21:19:58.662976] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.662985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.662996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.137 [2024-07-14 21:19:58.663009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.137 [2024-07-14 21:19:58.663037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.137 [2024-07-14 21:19:58.663123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.137 [2024-07-14 21:19:58.663140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.137 [2024-07-14 21:19:58.663147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.663155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.137 [2024-07-14 21:19:58.663173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.663182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.663204] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.137 [2024-07-14 21:19:58.663216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.137 [2024-07-14 21:19:58.663242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.137 [2024-07-14 21:19:58.663313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.137 [2024-07-14 21:19:58.663325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.137 [2024-07-14 21:19:58.663331] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.663338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.137 [2024-07-14 21:19:58.663355] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.663363] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.663370] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.137 [2024-07-14 21:19:58.663382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.137 [2024-07-14 21:19:58.663408] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.137 [2024-07-14 21:19:58.663482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.137 [2024-07-14 21:19:58.663494] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.137 [2024-07-14 21:19:58.663500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.663507] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.137 [2024-07-14 21:19:58.663524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.663536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.137 [2024-07-14 21:19:58.663543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.137 [2024-07-14 21:19:58.663555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.137 [2024-07-14 21:19:58.663582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.137 [2024-07-14 21:19:58.663652] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.137 [2024-07-14 21:19:58.663669] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.138 [2024-07-14 21:19:58.663676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.138 [2024-07-14 21:19:58.663683] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.138 [2024-07-14 21:19:58.663700] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.138 [2024-07-14 21:19:58.663709] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.138 [2024-07-14 21:19:58.663715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.138 [2024-07-14 21:19:58.663727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.138 [2024-07-14 21:19:58.663753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.138 [2024-07-14 21:19:58.663841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.138 [2024-07-14 21:19:58.663855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.138 [2024-07-14 21:19:58.663862] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.138 [2024-07-14 21:19:58.663868] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.138 [2024-07-14 21:19:58.663887] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.138 [2024-07-14 21:19:58.663897] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.138 [2024-07-14 21:19:58.663903] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.138 [2024-07-14 21:19:58.663919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.138 [2024-07-14 21:19:58.663948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.138 [2024-07-14 21:19:58.664019] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.138 [2024-07-14 21:19:58.664030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.138 [2024-07-14 21:19:58.664037] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.138 [2024-07-14 21:19:58.664044] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.138 [2024-07-14 21:19:58.664060] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.138 [2024-07-14 21:19:58.664073] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.138 [2024-07-14 21:19:58.664080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.138 [2024-07-14 21:19:58.664092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.138 [2024-07-14 21:19:58.664118] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.138 [2024-07-14 21:19:58.664180] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.138 [2024-07-14 21:19:58.664191] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.138 [2024-07-14 21:19:58.664197] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.138 [2024-07-14 21:19:58.664207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.138 [2024-07-14 21:19:58.664225] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.138 [2024-07-14 21:19:58.664245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.138 [2024-07-14 21:19:58.664252] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.138 [2024-07-14 21:19:58.664264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.138 [2024-07-14 21:19:58.664294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.139 [2024-07-14 21:19:58.664396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.139 [2024-07-14 21:19:58.664411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.139 [2024-07-14 21:19:58.664418] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.139 [2024-07-14 21:19:58.664426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.139 [2024-07-14 21:19:58.664445] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.139 [2024-07-14 21:19:58.664456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.139 [2024-07-14 21:19:58.664463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.139 [2024-07-14 21:19:58.664477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.139 [2024-07-14 21:19:58.664507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.139 [2024-07-14 21:19:58.664577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.139 [2024-07-14 21:19:58.664591] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.139 [2024-07-14 21:19:58.664597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.139 [2024-07-14 21:19:58.664605] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.139 [2024-07-14 21:19:58.664624] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.139 [2024-07-14 21:19:58.664634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.139 [2024-07-14 21:19:58.664642] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.139 [2024-07-14 21:19:58.664671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.139 [2024-07-14 21:19:58.664727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.139 [2024-07-14 21:19:58.664806] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.139 [2024-07-14 21:19:58.668837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.139 [2024-07-14 21:19:58.668851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.139 [2024-07-14 21:19:58.668859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.139 [2024-07-14 21:19:58.668882] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.139 [2024-07-14 21:19:58.668900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.139 [2024-07-14 21:19:58.668908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.139 [2024-07-14 21:19:58.668923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.139 [2024-07-14 21:19:58.668957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.139 [2024-07-14 21:19:58.669043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.139 [2024-07-14 21:19:58.669059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.139 [2024-07-14 21:19:58.669067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.139 [2024-07-14 21:19:58.669074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.139 [2024-07-14 21:19:58.669088] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:47.399 00:20:47.399 
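The trace above ends with the discovery controller being shut down ("shutdown complete in 6 milliseconds"); the invocation that follows repeats the connect/enable/identify sequence against nqn.2016-06.io.spdk:cnode1 with -L all, which is what produces the per-PDU *DEBUG* lines and the controller report further down. As a rough illustration of what that identify pass is exercising, here is a minimal C sketch against the SPDK public NVMe host API (spdk/nvme.h); the program name and the fields it prints are illustrative assumptions, not part of the test scripts, and error handling is reduced to early returns.

/* identify_sketch.c - minimal sketch, assuming the SPDK public API in spdk/nvme.h.
 * Mirrors what spdk_nvme_identify does at a high level: parse the -r transport
 * string, connect to the controller over NVMe/TCP (which drives the fabric
 * CONNECT / PROPERTY GET / PROPERTY SET state machine traced above), and read
 * the cached identify controller data.
 */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* illustrative app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same key:value transport string format the test passes to -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Connect runs the admin-queue bring-up traced in the debug output. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Identify controller data cached by the driver during bring-up. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.20s\n", (const char *)cdata->sn);
	printf("Model Number:  %.40s\n", (const char *)cdata->mn);
	printf("Firmware:      %.8s\n",  (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}

The string handed to -r is the format accepted by spdk_nvme_transport_id_parse(), which is how the same identify binary can be pointed at either the discovery subsystem or cnode1 by changing only the subnqn field.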
21:19:58 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:47.399 [2024-07-14 21:19:58.782925] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:47.399 [2024-07-14 21:19:58.783072] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80019 ] 00:20:47.661 [2024-07-14 21:19:58.953487] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:47.661 [2024-07-14 21:19:58.953627] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:47.661 [2024-07-14 21:19:58.953642] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:47.661 [2024-07-14 21:19:58.953669] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:47.661 [2024-07-14 21:19:58.953685] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:47.661 [2024-07-14 21:19:58.953885] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:47.661 [2024-07-14 21:19:58.953953] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:47.662 [2024-07-14 21:19:58.966780] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:47.662 [2024-07-14 21:19:58.966834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:47.662 [2024-07-14 21:19:58.966849] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:47.662 [2024-07-14 21:19:58.966862] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:47.662 [2024-07-14 21:19:58.966941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.966956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.966965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.662 [2024-07-14 21:19:58.966989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:47.662 [2024-07-14 21:19:58.967030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.662 [2024-07-14 21:19:58.972840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.662 [2024-07-14 21:19:58.972888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.662 [2024-07-14 21:19:58.972897] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.972912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.662 [2024-07-14 21:19:58.972936] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:47.662 [2024-07-14 21:19:58.972962] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:47.662 [2024-07-14 21:19:58.972975] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:47.662 [2024-07-14 21:19:58.972993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.973002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.973010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.662 [2024-07-14 21:19:58.973027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.662 [2024-07-14 21:19:58.973064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.662 [2024-07-14 21:19:58.973135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.662 [2024-07-14 21:19:58.973148] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.662 [2024-07-14 21:19:58.973158] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.973166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.662 [2024-07-14 21:19:58.973179] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:47.662 [2024-07-14 21:19:58.973194] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:47.662 [2024-07-14 21:19:58.973207] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.973214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.973221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.662 [2024-07-14 21:19:58.973240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.662 [2024-07-14 21:19:58.973269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.662 [2024-07-14 21:19:58.973330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.662 [2024-07-14 21:19:58.973341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.662 [2024-07-14 21:19:58.973347] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.973354] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.662 [2024-07-14 21:19:58.973364] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:47.662 [2024-07-14 21:19:58.973382] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:47.662 [2024-07-14 21:19:58.973397] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.973406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.973413] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.662 [2024-07-14 21:19:58.973427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 
cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.662 [2024-07-14 21:19:58.973453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.662 [2024-07-14 21:19:58.973510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.662 [2024-07-14 21:19:58.973524] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.662 [2024-07-14 21:19:58.973530] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.973537] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.662 [2024-07-14 21:19:58.973547] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:47.662 [2024-07-14 21:19:58.973563] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.973572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.973579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.662 [2024-07-14 21:19:58.973596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.662 [2024-07-14 21:19:58.973623] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.662 [2024-07-14 21:19:58.973686] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.662 [2024-07-14 21:19:58.973697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.662 [2024-07-14 21:19:58.973704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.973711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.662 [2024-07-14 21:19:58.973720] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:47.662 [2024-07-14 21:19:58.973739] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:47.662 [2024-07-14 21:19:58.973753] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:47.662 [2024-07-14 21:19:58.973864] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:47.662 [2024-07-14 21:19:58.973874] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:47.662 [2024-07-14 21:19:58.973890] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.973898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.973910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.662 [2024-07-14 21:19:58.973927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.662 [2024-07-14 21:19:58.973962] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.662 [2024-07-14 
21:19:58.974027] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.662 [2024-07-14 21:19:58.974039] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.662 [2024-07-14 21:19:58.974045] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.974051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.662 [2024-07-14 21:19:58.974066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:47.662 [2024-07-14 21:19:58.974084] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.974092] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.974100] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.662 [2024-07-14 21:19:58.974113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.662 [2024-07-14 21:19:58.974139] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.662 [2024-07-14 21:19:58.974205] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.662 [2024-07-14 21:19:58.974217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.662 [2024-07-14 21:19:58.974223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.974229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.662 [2024-07-14 21:19:58.974238] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:47.662 [2024-07-14 21:19:58.974248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:47.662 [2024-07-14 21:19:58.974261] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:47.662 [2024-07-14 21:19:58.974282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:47.662 [2024-07-14 21:19:58.974302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.974310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.662 [2024-07-14 21:19:58.974325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.662 [2024-07-14 21:19:58.974367] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.662 [2024-07-14 21:19:58.974484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.662 [2024-07-14 21:19:58.974497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.662 [2024-07-14 21:19:58.974506] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.974515] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, 
cccid=0 00:20:47.662 [2024-07-14 21:19:58.974524] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:47.662 [2024-07-14 21:19:58.974532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.974548] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.974556] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.974569] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.662 [2024-07-14 21:19:58.974578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.662 [2024-07-14 21:19:58.974584] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.662 [2024-07-14 21:19:58.974591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.662 [2024-07-14 21:19:58.974608] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:47.662 [2024-07-14 21:19:58.974622] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:47.663 [2024-07-14 21:19:58.974632] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:47.663 [2024-07-14 21:19:58.974641] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:47.663 [2024-07-14 21:19:58.974649] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:47.663 [2024-07-14 21:19:58.974657] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:47.663 [2024-07-14 21:19:58.974674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:47.663 [2024-07-14 21:19:58.974689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.974698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.974705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:47.663 [2024-07-14 21:19:58.974722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:47.663 [2024-07-14 21:19:58.974767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.663 [2024-07-14 21:19:58.974835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.663 [2024-07-14 21:19:58.974846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.663 [2024-07-14 21:19:58.974852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.974859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.663 [2024-07-14 21:19:58.974872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.974880] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.974887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=0 on tqpair(0x61500000f080) 00:20:47.663 [2024-07-14 21:19:58.974904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.663 [2024-07-14 21:19:58.974916] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.974925] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.974932] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:47.663 [2024-07-14 21:19:58.974942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.663 [2024-07-14 21:19:58.974952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.974958] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.974964] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:47.663 [2024-07-14 21:19:58.974974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.663 [2024-07-14 21:19:58.974983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.974990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.974996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.663 [2024-07-14 21:19:58.975006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.663 [2024-07-14 21:19:58.975014] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:47.663 [2024-07-14 21:19:58.975035] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:47.663 [2024-07-14 21:19:58.975047] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.975055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:47.663 [2024-07-14 21:19:58.975071] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.663 [2024-07-14 21:19:58.975102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:47.663 [2024-07-14 21:19:58.975113] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:47.663 [2024-07-14 21:19:58.975120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:47.663 [2024-07-14 21:19:58.975127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.663 [2024-07-14 21:19:58.975134] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:47.663 [2024-07-14 21:19:58.975236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.663 [2024-07-14 21:19:58.975248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.663 [2024-07-14 
21:19:58.975254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.975260] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:47.663 [2024-07-14 21:19:58.975270] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:47.663 [2024-07-14 21:19:58.975280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:47.663 [2024-07-14 21:19:58.975293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:47.663 [2024-07-14 21:19:58.975304] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:47.663 [2024-07-14 21:19:58.975317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.975326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.975333] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:47.663 [2024-07-14 21:19:58.975350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:47.663 [2024-07-14 21:19:58.975377] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:47.663 [2024-07-14 21:19:58.975433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.663 [2024-07-14 21:19:58.975444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.663 [2024-07-14 21:19:58.975452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.975459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:47.663 [2024-07-14 21:19:58.975544] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:47.663 [2024-07-14 21:19:58.975573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:47.663 [2024-07-14 21:19:58.975590] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.975599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:47.663 [2024-07-14 21:19:58.975612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.663 [2024-07-14 21:19:58.975643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:47.663 [2024-07-14 21:19:58.975769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.663 [2024-07-14 21:19:58.975782] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.663 [2024-07-14 21:19:58.975788] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.975795] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 
00:20:47.663 [2024-07-14 21:19:58.975803] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:47.663 [2024-07-14 21:19:58.975810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.975827] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.975835] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.975847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.663 [2024-07-14 21:19:58.975860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.663 [2024-07-14 21:19:58.975866] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.975873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:47.663 [2024-07-14 21:19:58.975910] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:47.663 [2024-07-14 21:19:58.975929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:47.663 [2024-07-14 21:19:58.975953] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:47.663 [2024-07-14 21:19:58.975972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.975981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:47.663 [2024-07-14 21:19:58.976002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.663 [2024-07-14 21:19:58.976036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:47.663 [2024-07-14 21:19:58.976131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.663 [2024-07-14 21:19:58.976142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.663 [2024-07-14 21:19:58.976148] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.976155] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:47.663 [2024-07-14 21:19:58.976162] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:47.663 [2024-07-14 21:19:58.976169] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.976180] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.976187] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.976199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.663 [2024-07-14 21:19:58.976209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.663 [2024-07-14 21:19:58.976217] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.976224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:47.663 [2024-07-14 21:19:58.976257] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:47.663 [2024-07-14 21:19:58.976279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:47.663 [2024-07-14 21:19:58.976297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.976305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:47.663 [2024-07-14 21:19:58.976329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.663 [2024-07-14 21:19:58.976393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:47.663 [2024-07-14 21:19:58.976477] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.663 [2024-07-14 21:19:58.976489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.663 [2024-07-14 21:19:58.976496] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.663 [2024-07-14 21:19:58.976502] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:47.663 [2024-07-14 21:19:58.976511] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:47.664 [2024-07-14 21:19:58.976518] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.976530] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.976537] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.976553] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.664 [2024-07-14 21:19:58.976563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.664 [2024-07-14 21:19:58.976569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.976576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:47.664 [2024-07-14 21:19:58.976608] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:47.664 [2024-07-14 21:19:58.976635] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:47.664 [2024-07-14 21:19:58.976650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:47.664 [2024-07-14 21:19:58.976676] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:47.664 [2024-07-14 21:19:58.976688] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:47.664 [2024-07-14 21:19:58.976696] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:47.664 [2024-07-14 21:19:58.976705] 
nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:47.664 [2024-07-14 21:19:58.976713] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:47.664 [2024-07-14 21:19:58.976723] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:47.664 [2024-07-14 21:19:58.976765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.976792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:47.664 [2024-07-14 21:19:58.976807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.664 [2024-07-14 21:19:58.980868] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.980884] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.980891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:47.664 [2024-07-14 21:19:58.980906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.664 [2024-07-14 21:19:58.980952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:47.664 [2024-07-14 21:19:58.980965] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:47.664 [2024-07-14 21:19:58.981050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.664 [2024-07-14 21:19:58.981063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.664 [2024-07-14 21:19:58.981069] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.981078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:47.664 [2024-07-14 21:19:58.981090] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.664 [2024-07-14 21:19:58.981099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.664 [2024-07-14 21:19:58.981105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.981117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:47.664 [2024-07-14 21:19:58.981135] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.981143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:47.664 [2024-07-14 21:19:58.981156] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.664 [2024-07-14 21:19:58.981183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:47.664 [2024-07-14 21:19:58.981245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.664 [2024-07-14 21:19:58.981256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.664 [2024-07-14 21:19:58.981262] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.664 
[2024-07-14 21:19:58.981268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:47.664 [2024-07-14 21:19:58.981285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.981293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:47.664 [2024-07-14 21:19:58.981306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.664 [2024-07-14 21:19:58.981331] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:47.664 [2024-07-14 21:19:58.981387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.664 [2024-07-14 21:19:58.981398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.664 [2024-07-14 21:19:58.981403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.981410] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:47.664 [2024-07-14 21:19:58.981426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.981433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:47.664 [2024-07-14 21:19:58.981449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.664 [2024-07-14 21:19:58.981476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:47.664 [2024-07-14 21:19:58.981535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.664 [2024-07-14 21:19:58.981545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.664 [2024-07-14 21:19:58.981551] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.981558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:47.664 [2024-07-14 21:19:58.981591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.981601] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:47.664 [2024-07-14 21:19:58.981615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.664 [2024-07-14 21:19:58.981629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.981637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:47.664 [2024-07-14 21:19:58.981649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.664 [2024-07-14 21:19:58.981667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.981675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:20:47.664 [2024-07-14 21:19:58.981687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 
nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.664 [2024-07-14 21:19:58.981707] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.981715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:20:47.664 [2024-07-14 21:19:58.981727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.664 [2024-07-14 21:19:58.981769] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:47.664 [2024-07-14 21:19:58.981782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:47.664 [2024-07-14 21:19:58.981790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:20:47.664 [2024-07-14 21:19:58.981797] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:47.664 [2024-07-14 21:19:58.981971] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.664 [2024-07-14 21:19:58.981984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.664 [2024-07-14 21:19:58.981990] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.981997] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:20:47.664 [2024-07-14 21:19:58.982010] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:20:47.664 [2024-07-14 21:19:58.982017] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.982047] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.982057] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.982071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.664 [2024-07-14 21:19:58.982081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.664 [2024-07-14 21:19:58.982086] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.982092] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:20:47.664 [2024-07-14 21:19:58.982100] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:47.664 [2024-07-14 21:19:58.982106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.982116] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.982134] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.982146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.664 [2024-07-14 21:19:58.982155] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.664 [2024-07-14 21:19:58.982160] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.982167] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 
00:20:47.664 [2024-07-14 21:19:58.982174] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:47.664 [2024-07-14 21:19:58.982180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.982192] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.982199] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.982208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:47.664 [2024-07-14 21:19:58.982219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:47.664 [2024-07-14 21:19:58.982225] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.982231] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:20:47.664 [2024-07-14 21:19:58.982238] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:47.664 [2024-07-14 21:19:58.982245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.982255] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.982261] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:47.664 [2024-07-14 21:19:58.982269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.664 [2024-07-14 21:19:58.982277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.664 [2024-07-14 21:19:58.982283] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.665 [2024-07-14 21:19:58.982290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:47.665 [2024-07-14 21:19:58.982319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.665 [2024-07-14 21:19:58.982334] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.665 [2024-07-14 21:19:58.982340] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.665 [2024-07-14 21:19:58.982346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:47.665 [2024-07-14 21:19:58.982361] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.665 ===================================================== 00:20:47.665 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.665 ===================================================== 00:20:47.665 Controller Capabilities/Features 00:20:47.665 ================================ 00:20:47.665 Vendor ID: 8086 00:20:47.665 Subsystem Vendor ID: 8086 00:20:47.665 Serial Number: SPDK00000000000001 00:20:47.665 Model Number: SPDK bdev Controller 00:20:47.665 Firmware Version: 24.09 00:20:47.665 Recommended Arb Burst: 6 00:20:47.665 IEEE OUI Identifier: e4 d2 5c 00:20:47.665 Multi-path I/O 00:20:47.665 May have multiple subsystem ports: Yes 00:20:47.665 May have multiple controllers: Yes 00:20:47.665 Associated with SR-IOV VF: No 00:20:47.665 Max Data Transfer Size: 131072 00:20:47.665 Max Number of Namespaces: 32 00:20:47.665 Max Number of I/O Queues: 127 00:20:47.665 NVMe Specification Version (VS): 1.3 00:20:47.665 NVMe Specification Version (Identify): 1.3 00:20:47.665 Maximum Queue 
Entries: 128 00:20:47.665 Contiguous Queues Required: Yes 00:20:47.665 Arbitration Mechanisms Supported 00:20:47.665 Weighted Round Robin: Not Supported 00:20:47.665 Vendor Specific: Not Supported 00:20:47.665 Reset Timeout: 15000 ms 00:20:47.665 Doorbell Stride: 4 bytes 00:20:47.665 NVM Subsystem Reset: Not Supported 00:20:47.665 Command Sets Supported 00:20:47.665 NVM Command Set: Supported 00:20:47.665 Boot Partition: Not Supported 00:20:47.665 Memory Page Size Minimum: 4096 bytes 00:20:47.665 Memory Page Size Maximum: 4096 bytes 00:20:47.665 Persistent Memory Region: Not Supported 00:20:47.665 Optional Asynchronous Events Supported 00:20:47.665 Namespace Attribute Notices: Supported 00:20:47.665 Firmware Activation Notices: Not Supported 00:20:47.665 ANA Change Notices: Not Supported 00:20:47.665 PLE Aggregate Log Change Notices: Not Supported 00:20:47.665 LBA Status Info Alert Notices: Not Supported 00:20:47.665 EGE Aggregate Log Change Notices: Not Supported 00:20:47.665 Normal NVM Subsystem Shutdown event: Not Supported 00:20:47.665 Zone Descriptor Change Notices: Not Supported 00:20:47.665 Discovery Log Change Notices: Not Supported 00:20:47.665 Controller Attributes 00:20:47.665 128-bit Host Identifier: Supported 00:20:47.665 Non-Operational Permissive Mode: Not Supported 00:20:47.665 NVM Sets: Not Supported 00:20:47.665 Read Recovery Levels: Not Supported 00:20:47.665 Endurance Groups: Not Supported 00:20:47.665 Predictable Latency Mode: Not Supported 00:20:47.665 Traffic Based Keep ALive: Not Supported 00:20:47.665 Namespace Granularity: Not Supported 00:20:47.665 SQ Associations: Not Supported 00:20:47.665 UUID List: Not Supported 00:20:47.665 Multi-Domain Subsystem: Not Supported 00:20:47.665 Fixed Capacity Management: Not Supported 00:20:47.665 Variable Capacity Management: Not Supported 00:20:47.665 Delete Endurance Group: Not Supported 00:20:47.665 Delete NVM Set: Not Supported 00:20:47.665 Extended LBA Formats Supported: Not Supported 00:20:47.665 Flexible Data Placement Supported: Not Supported 00:20:47.665 00:20:47.665 Controller Memory Buffer Support 00:20:47.665 ================================ 00:20:47.665 Supported: No 00:20:47.665 00:20:47.665 Persistent Memory Region Support 00:20:47.665 ================================ 00:20:47.665 Supported: No 00:20:47.665 00:20:47.665 Admin Command Set Attributes 00:20:47.665 ============================ 00:20:47.665 Security Send/Receive: Not Supported 00:20:47.665 Format NVM: Not Supported 00:20:47.665 Firmware Activate/Download: Not Supported 00:20:47.665 Namespace Management: Not Supported 00:20:47.665 Device Self-Test: Not Supported 00:20:47.665 Directives: Not Supported 00:20:47.665 NVMe-MI: Not Supported 00:20:47.665 Virtualization Management: Not Supported 00:20:47.665 Doorbell Buffer Config: Not Supported 00:20:47.665 Get LBA Status Capability: Not Supported 00:20:47.665 Command & Feature Lockdown Capability: Not Supported 00:20:47.665 Abort Command Limit: 4 00:20:47.665 Async Event Request Limit: 4 00:20:47.665 Number of Firmware Slots: N/A 00:20:47.665 Firmware Slot 1 Read-Only: N/A 00:20:47.665 Firmware Activation Without Reset: [2024-07-14 21:19:58.982371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.665 [2024-07-14 21:19:58.982377] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.665 [2024-07-14 21:19:58.982383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:20:47.665 [2024-07-14 21:19:58.982395] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.665 [2024-07-14 21:19:58.982404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.665 [2024-07-14 21:19:58.982410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.665 [2024-07-14 21:19:58.982418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:47.665 N/A 00:20:47.665 Multiple Update Detection Support: N/A 00:20:47.665 Firmware Update Granularity: No Information Provided 00:20:47.665 Per-Namespace SMART Log: No 00:20:47.665 Asymmetric Namespace Access Log Page: Not Supported 00:20:47.665 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:47.665 Command Effects Log Page: Supported 00:20:47.665 Get Log Page Extended Data: Supported 00:20:47.665 Telemetry Log Pages: Not Supported 00:20:47.665 Persistent Event Log Pages: Not Supported 00:20:47.665 Supported Log Pages Log Page: May Support 00:20:47.665 Commands Supported & Effects Log Page: Not Supported 00:20:47.665 Feature Identifiers & Effects Log Page:May Support 00:20:47.665 NVMe-MI Commands & Effects Log Page: May Support 00:20:47.665 Data Area 4 for Telemetry Log: Not Supported 00:20:47.665 Error Log Page Entries Supported: 128 00:20:47.665 Keep Alive: Supported 00:20:47.665 Keep Alive Granularity: 10000 ms 00:20:47.665 00:20:47.665 NVM Command Set Attributes 00:20:47.665 ========================== 00:20:47.665 Submission Queue Entry Size 00:20:47.665 Max: 64 00:20:47.665 Min: 64 00:20:47.665 Completion Queue Entry Size 00:20:47.665 Max: 16 00:20:47.665 Min: 16 00:20:47.665 Number of Namespaces: 32 00:20:47.665 Compare Command: Supported 00:20:47.665 Write Uncorrectable Command: Not Supported 00:20:47.665 Dataset Management Command: Supported 00:20:47.665 Write Zeroes Command: Supported 00:20:47.665 Set Features Save Field: Not Supported 00:20:47.665 Reservations: Supported 00:20:47.665 Timestamp: Not Supported 00:20:47.665 Copy: Supported 00:20:47.665 Volatile Write Cache: Present 00:20:47.665 Atomic Write Unit (Normal): 1 00:20:47.665 Atomic Write Unit (PFail): 1 00:20:47.665 Atomic Compare & Write Unit: 1 00:20:47.665 Fused Compare & Write: Supported 00:20:47.665 Scatter-Gather List 00:20:47.665 SGL Command Set: Supported 00:20:47.665 SGL Keyed: Supported 00:20:47.665 SGL Bit Bucket Descriptor: Not Supported 00:20:47.665 SGL Metadata Pointer: Not Supported 00:20:47.665 Oversized SGL: Not Supported 00:20:47.665 SGL Metadata Address: Not Supported 00:20:47.665 SGL Offset: Supported 00:20:47.665 Transport SGL Data Block: Not Supported 00:20:47.665 Replay Protected Memory Block: Not Supported 00:20:47.665 00:20:47.665 Firmware Slot Information 00:20:47.665 ========================= 00:20:47.665 Active slot: 1 00:20:47.665 Slot 1 Firmware Revision: 24.09 00:20:47.665 00:20:47.665 00:20:47.665 Commands Supported and Effects 00:20:47.665 ============================== 00:20:47.665 Admin Commands 00:20:47.665 -------------- 00:20:47.665 Get Log Page (02h): Supported 00:20:47.665 Identify (06h): Supported 00:20:47.665 Abort (08h): Supported 00:20:47.665 Set Features (09h): Supported 00:20:47.665 Get Features (0Ah): Supported 00:20:47.665 Asynchronous Event Request (0Ch): Supported 00:20:47.665 Keep Alive (18h): Supported 00:20:47.665 I/O Commands 00:20:47.665 ------------ 00:20:47.665 Flush (00h): Supported LBA-Change 00:20:47.665 Write (01h): Supported LBA-Change 00:20:47.665 Read (02h): Supported 00:20:47.665 Compare (05h): Supported 00:20:47.665 Write Zeroes 
(08h): Supported LBA-Change 00:20:47.665 Dataset Management (09h): Supported LBA-Change 00:20:47.665 Copy (19h): Supported LBA-Change 00:20:47.665 00:20:47.665 Error Log 00:20:47.665 ========= 00:20:47.665 00:20:47.665 Arbitration 00:20:47.665 =========== 00:20:47.665 Arbitration Burst: 1 00:20:47.665 00:20:47.665 Power Management 00:20:47.665 ================ 00:20:47.665 Number of Power States: 1 00:20:47.665 Current Power State: Power State #0 00:20:47.665 Power State #0: 00:20:47.665 Max Power: 0.00 W 00:20:47.665 Non-Operational State: Operational 00:20:47.665 Entry Latency: Not Reported 00:20:47.665 Exit Latency: Not Reported 00:20:47.665 Relative Read Throughput: 0 00:20:47.665 Relative Read Latency: 0 00:20:47.665 Relative Write Throughput: 0 00:20:47.665 Relative Write Latency: 0 00:20:47.665 Idle Power: Not Reported 00:20:47.665 Active Power: Not Reported 00:20:47.665 Non-Operational Permissive Mode: Not Supported 00:20:47.665 00:20:47.666 Health Information 00:20:47.666 ================== 00:20:47.666 Critical Warnings: 00:20:47.666 Available Spare Space: OK 00:20:47.666 Temperature: OK 00:20:47.666 Device Reliability: OK 00:20:47.666 Read Only: No 00:20:47.666 Volatile Memory Backup: OK 00:20:47.666 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:47.666 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:47.666 Available Spare: 0% 00:20:47.666 Available Spare Threshold: 0% 00:20:47.666 Life Percentage Used:[2024-07-14 21:19:58.982592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.982605] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:20:47.666 [2024-07-14 21:19:58.982619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.666 [2024-07-14 21:19:58.982655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:47.666 [2024-07-14 21:19:58.982725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.666 [2024-07-14 21:19:58.982740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.666 [2024-07-14 21:19:58.982747] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.982768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:47.666 [2024-07-14 21:19:58.982844] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:47.666 [2024-07-14 21:19:58.982864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:47.666 [2024-07-14 21:19:58.982877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.666 [2024-07-14 21:19:58.982886] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:47.666 [2024-07-14 21:19:58.982895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.666 [2024-07-14 21:19:58.982903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:47.666 [2024-07-14 21:19:58.982922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.666 [2024-07-14 21:19:58.982930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.666 [2024-07-14 21:19:58.982944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.666 [2024-07-14 21:19:58.982959] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.982967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.982974] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.666 [2024-07-14 21:19:58.982988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.666 [2024-07-14 21:19:58.983028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.666 [2024-07-14 21:19:58.983086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.666 [2024-07-14 21:19:58.983101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.666 [2024-07-14 21:19:58.983108] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.983115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.666 [2024-07-14 21:19:58.983129] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.983137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.983144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.666 [2024-07-14 21:19:58.983158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.666 [2024-07-14 21:19:58.983195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.666 [2024-07-14 21:19:58.983287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.666 [2024-07-14 21:19:58.983298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.666 [2024-07-14 21:19:58.983304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.983311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.666 [2024-07-14 21:19:58.983320] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:47.666 [2024-07-14 21:19:58.983328] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:47.666 [2024-07-14 21:19:58.983344] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.983352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.983363] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.666 [2024-07-14 21:19:58.983376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.666 [2024-07-14 21:19:58.983402] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.666 [2024-07-14 21:19:58.983461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.666 [2024-07-14 21:19:58.983472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.666 [2024-07-14 21:19:58.983478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.983485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.666 [2024-07-14 21:19:58.983505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.983514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.983520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.666 [2024-07-14 21:19:58.983532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.666 [2024-07-14 21:19:58.983557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.666 [2024-07-14 21:19:58.983625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.666 [2024-07-14 21:19:58.983639] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.666 [2024-07-14 21:19:58.983645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.983652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.666 [2024-07-14 21:19:58.983668] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.983676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.983682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.666 [2024-07-14 21:19:58.983695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.666 [2024-07-14 21:19:58.983719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.666 [2024-07-14 21:19:58.983815] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.666 [2024-07-14 21:19:58.983829] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.666 [2024-07-14 21:19:58.983836] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.983843] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.666 [2024-07-14 21:19:58.983860] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.983868] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.983875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.666 [2024-07-14 21:19:58.983891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.666 [2024-07-14 21:19:58.983919] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.666 [2024-07-14 21:19:58.983982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:20:47.666 [2024-07-14 21:19:58.983993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.666 [2024-07-14 21:19:58.983999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.984006] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.666 [2024-07-14 21:19:58.984023] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.984031] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.984037] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.666 [2024-07-14 21:19:58.984053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.666 [2024-07-14 21:19:58.984081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.666 [2024-07-14 21:19:58.984153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.666 [2024-07-14 21:19:58.984165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.666 [2024-07-14 21:19:58.984171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.666 [2024-07-14 21:19:58.984178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.667 [2024-07-14 21:19:58.984197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.667 [2024-07-14 21:19:58.984205] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.667 [2024-07-14 21:19:58.984211] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.667 [2024-07-14 21:19:58.984224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.667 [2024-07-14 21:19:58.984248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.667 [2024-07-14 21:19:58.984305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.667 [2024-07-14 21:19:58.984316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.667 [2024-07-14 21:19:58.984347] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.667 [2024-07-14 21:19:58.984375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.667 [2024-07-14 21:19:58.984395] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.667 [2024-07-14 21:19:58.984403] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.667 [2024-07-14 21:19:58.984410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.667 [2024-07-14 21:19:58.984423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.667 [2024-07-14 21:19:58.984450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.667 [2024-07-14 21:19:58.984508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.667 [2024-07-14 21:19:58.984523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.667 [2024-07-14 
21:19:58.984530] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.667 [2024-07-14 21:19:58.984537] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.667 [2024-07-14 21:19:58.984554] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.667 [2024-07-14 21:19:58.984563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.667 [2024-07-14 21:19:58.984569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.667 [2024-07-14 21:19:58.984582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.667 [2024-07-14 21:19:58.984608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.667 [2024-07-14 21:19:58.984684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.667 [2024-07-14 21:19:58.984695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.667 [2024-07-14 21:19:58.984701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.667 [2024-07-14 21:19:58.984723] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.667 [2024-07-14 21:19:58.984740] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.667 [2024-07-14 21:19:58.984747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.667 [2024-07-14 21:19:58.984754] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.667 [2024-07-14 21:19:58.984766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.667 [2024-07-14 21:19:58.984792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.667 [2024-07-14 21:19:58.988846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.667 [2024-07-14 21:19:58.988876] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.667 [2024-07-14 21:19:58.988885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.667 [2024-07-14 21:19:58.988892] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.667 [2024-07-14 21:19:58.988918] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:47.667 [2024-07-14 21:19:58.988927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:47.667 [2024-07-14 21:19:58.988933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:47.667 [2024-07-14 21:19:58.988948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.667 [2024-07-14 21:19:58.988981] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:47.667 [2024-07-14 21:19:58.989051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:47.667 [2024-07-14 21:19:58.989062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:47.667 [2024-07-14 21:19:58.989068] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:47.667 [2024-07-14 21:19:58.989075] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:47.667 [2024-07-14 21:19:58.989088] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:20:47.667 0% 00:20:47.667 Data Units Read: 0 00:20:47.667 Data Units Written: 0 00:20:47.667 Host Read Commands: 0 00:20:47.667 Host Write Commands: 0 00:20:47.667 Controller Busy Time: 0 minutes 00:20:47.667 Power Cycles: 0 00:20:47.667 Power On Hours: 0 hours 00:20:47.667 Unsafe Shutdowns: 0 00:20:47.667 Unrecoverable Media Errors: 0 00:20:47.667 Lifetime Error Log Entries: 0 00:20:47.667 Warning Temperature Time: 0 minutes 00:20:47.667 Critical Temperature Time: 0 minutes 00:20:47.667 00:20:47.667 Number of Queues 00:20:47.667 ================ 00:20:47.667 Number of I/O Submission Queues: 127 00:20:47.667 Number of I/O Completion Queues: 127 00:20:47.667 00:20:47.667 Active Namespaces 00:20:47.667 ================= 00:20:47.667 Namespace ID:1 00:20:47.667 Error Recovery Timeout: Unlimited 00:20:47.667 Command Set Identifier: NVM (00h) 00:20:47.667 Deallocate: Supported 00:20:47.667 Deallocated/Unwritten Error: Not Supported 00:20:47.667 Deallocated Read Value: Unknown 00:20:47.667 Deallocate in Write Zeroes: Not Supported 00:20:47.667 Deallocated Guard Field: 0xFFFF 00:20:47.667 Flush: Supported 00:20:47.667 Reservation: Supported 00:20:47.667 Namespace Sharing Capabilities: Multiple Controllers 00:20:47.667 Size (in LBAs): 131072 (0GiB) 00:20:47.667 Capacity (in LBAs): 131072 (0GiB) 00:20:47.667 Utilization (in LBAs): 131072 (0GiB) 00:20:47.667 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:47.667 EUI64: ABCDEF0123456789 00:20:47.667 UUID: 4088c33a-b3d5-4383-b3c0-8e762a93f865 00:20:47.667 Thin Provisioning: Not Supported 00:20:47.667 Per-NS Atomic Units: Yes 00:20:47.667 Atomic Boundary Size (Normal): 0 00:20:47.667 Atomic Boundary Size (PFail): 0 00:20:47.667 Atomic Boundary Offset: 0 00:20:47.667 Maximum Single Source Range Length: 65535 00:20:47.667 Maximum Copy Length: 65535 00:20:47.667 Maximum Source Range Count: 1 00:20:47.667 NGUID/EUI64 Never Reused: No 00:20:47.667 Namespace Write Protected: No 00:20:47.667 Number of LBA Formats: 1 00:20:47.667 Current LBA Format: LBA Format #00 00:20:47.667 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:47.667 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:20:47.667 rmmod nvme_tcp 00:20:47.667 rmmod nvme_fabrics 00:20:47.667 rmmod nvme_keyring 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 79975 ']' 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 79975 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 79975 ']' 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 79975 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79975 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:47.667 killing process with pid 79975 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79975' 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 79975 00:20:47.667 21:19:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 79975 00:20:49.045 21:20:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:49.045 21:20:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:49.045 21:20:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:49.045 21:20:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.045 21:20:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:49.045 21:20:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.045 21:20:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.045 21:20:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.045 21:20:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:49.045 00:20:49.045 real 0m3.706s 00:20:49.045 user 0m10.164s 00:20:49.045 sys 0m0.859s 00:20:49.045 21:20:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:49.045 21:20:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:49.045 ************************************ 00:20:49.045 END TEST nvmf_identify 00:20:49.045 ************************************ 00:20:49.045 21:20:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:49.045 21:20:00 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:49.045 21:20:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:49.045 21:20:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:49.045 21:20:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:49.045 ************************************ 00:20:49.045 START TEST nvmf_perf 00:20:49.045 ************************************ 
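Before the perf trace continues, note how nvmf_identify tore its target down above: nvmftestfini unloads the host-side nvme-tcp/nvme-fabrics modules, killprocess reaps the nvmf_tgt reactor, and the namespaced interfaces are flushed. A rough bash sketch of that sequence, reconstructed from the trace only (the real helpers in nvmf/common.sh and common/autotest_common.sh carry retries and extra checks not shown here):

# Sketch only, assumed simplification of the traced nvmftestfini/killprocess flow.
nvmftestfini_sketch() {
    local pid=$1 name                      # nvmfpid of the running nvmf_tgt (79975 above)
    sync
    modprobe -v -r nvme-tcp                # trace also shows nvme_fabrics/nvme_keyring dropped
    modprobe -v -r nvme-fabrics
    name=$(ps --no-headers -o comm= "$pid")
    if [[ $name != sudo ]]; then           # sudo-wrapped targets would need different handling
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # assumed form of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if
}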
00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:49.045 * Looking for test storage... 00:20:49.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.045 21:20:00 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:49.045 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:49.305 Cannot find device "nvmf_tgt_br" 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:49.305 Cannot find device "nvmf_tgt_br2" 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:49.305 Cannot find device "nvmf_tgt_br" 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:49.305 Cannot find device "nvmf_tgt_br2" 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:49.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:49.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:49.305 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:49.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:20:49.565 00:20:49.565 --- 10.0.0.2 ping statistics --- 00:20:49.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.565 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:49.565 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:49.565 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:20:49.565 00:20:49.565 --- 10.0.0.3 ping statistics --- 00:20:49.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.565 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:49.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:49.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:49.565 00:20:49.565 --- 10.0.0.1 ping statistics --- 00:20:49.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.565 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=80199 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 80199 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 80199 ']' 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:49.565 21:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:49.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.566 21:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.566 21:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:49.566 21:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:49.566 [2024-07-14 21:20:01.056802] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:49.566 [2024-07-14 21:20:01.056997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.824 [2024-07-14 21:20:01.234635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:50.082 [2024-07-14 21:20:01.413445] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.082 [2024-07-14 21:20:01.413520] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.082 [2024-07-14 21:20:01.413536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.082 [2024-07-14 21:20:01.413549] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.082 [2024-07-14 21:20:01.413563] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.082 [2024-07-14 21:20:01.413844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.082 [2024-07-14 21:20:01.413947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.082 [2024-07-14 21:20:01.414497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.082 [2024-07-14 21:20:01.414509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.082 [2024-07-14 21:20:01.585760] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:50.648 21:20:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:50.648 21:20:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:50.648 21:20:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:50.648 21:20:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:50.648 21:20:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:50.648 21:20:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.648 21:20:02 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:50.648 21:20:02 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:51.213 21:20:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:51.213 21:20:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:51.213 21:20:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:20:51.213 21:20:02 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:51.779 21:20:03 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:51.779 21:20:03 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:20:51.779 21:20:03 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:51.779 21:20:03 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:51.779 21:20:03 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:52.037 [2024-07-14 21:20:03.379655] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
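With nvmf_tgt running inside nvmf_tgt_ns_spdk and the TCP transport initialized, the perf test can provision its subsystem next. For reference, the veth/bridge topology that nvmf_veth_init built a few steps earlier condenses to roughly the following (interface names and addresses as logged; the second target interface, the remaining bridge legs and the error handling of the real helper in nvmf/common.sh are omitted, so treat this as a sketch rather than the actual script):

# Initiator stays in the root namespace on 10.0.0.1, target listens on 10.0.0.2 in its netns.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge joining both sides
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator-to-target reachability check, as in the trace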
00:20:52.037 21:20:03 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:52.295 21:20:03 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:52.295 21:20:03 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:52.553 21:20:03 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:52.553 21:20:03 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:52.811 21:20:04 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:53.068 [2024-07-14 21:20:04.517586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.068 21:20:04 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:53.327 21:20:04 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:53.327 21:20:04 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:53.327 21:20:04 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:53.327 21:20:04 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:54.699 Initializing NVMe Controllers 00:20:54.699 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:54.699 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:54.699 Initialization complete. Launching workers. 00:20:54.699 ======================================================== 00:20:54.699 Latency(us) 00:20:54.699 Device Information : IOPS MiB/s Average min max 00:20:54.700 PCIE (0000:00:10.0) NSID 1 from core 0: 22409.92 87.54 1428.01 378.76 8079.86 00:20:54.700 ======================================================== 00:20:54.700 Total : 22409.92 87.54 1428.01 378.76 8079.86 00:20:54.700 00:20:54.700 21:20:06 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:56.116 Initializing NVMe Controllers 00:20:56.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:56.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:56.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:56.116 Initialization complete. Launching workers. 
00:20:56.116 ======================================================== 00:20:56.116 Latency(us) 00:20:56.117 Device Information : IOPS MiB/s Average min max 00:20:56.117 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2495.00 9.75 398.51 145.76 5499.20 00:20:56.117 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8128.12 6652.47 11986.62 00:20:56.117 ======================================================== 00:20:56.117 Total : 2619.00 10.23 764.48 145.76 11986.62 00:20:56.117 00:20:56.117 21:20:07 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:57.489 Initializing NVMe Controllers 00:20:57.489 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:57.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:57.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:57.489 Initialization complete. Launching workers. 00:20:57.489 ======================================================== 00:20:57.489 Latency(us) 00:20:57.489 Device Information : IOPS MiB/s Average min max 00:20:57.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5868.95 22.93 5479.54 830.16 12853.72 00:20:57.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3738.97 14.61 8601.36 4981.07 20267.22 00:20:57.489 ======================================================== 00:20:57.489 Total : 9607.92 37.53 6694.41 830.16 20267.22 00:20:57.489 00:20:57.489 21:20:09 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:57.489 21:20:09 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:00.776 Initializing NVMe Controllers 00:21:00.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:00.776 Controller IO queue size 128, less than required. 00:21:00.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:00.776 Controller IO queue size 128, less than required. 00:21:00.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:00.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:00.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:00.776 Initialization complete. Launching workers. 
00:21:00.776 ======================================================== 00:21:00.776 Latency(us) 00:21:00.776 Device Information : IOPS MiB/s Average min max 00:21:00.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1301.92 325.48 103126.52 54590.15 328380.35 00:21:00.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 616.46 154.12 225349.02 91371.63 494243.84 00:21:00.776 ======================================================== 00:21:00.776 Total : 1918.38 479.59 142402.08 54590.15 494243.84 00:21:00.776 00:21:00.776 21:20:11 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:00.776 Initializing NVMe Controllers 00:21:00.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:00.776 Controller IO queue size 128, less than required. 00:21:00.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:00.776 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:00.776 Controller IO queue size 128, less than required. 00:21:00.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:00.776 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:21:00.776 WARNING: Some requested NVMe devices were skipped 00:21:00.776 No valid NVMe controllers or AIO or URING devices found 00:21:00.776 21:20:12 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:04.064 Initializing NVMe Controllers 00:21:04.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:04.064 Controller IO queue size 128, less than required. 00:21:04.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:04.064 Controller IO queue size 128, less than required. 00:21:04.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:04.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:04.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:04.064 Initialization complete. Launching workers. 
00:21:04.064 00:21:04.064 ==================== 00:21:04.064 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:04.064 TCP transport: 00:21:04.064 polls: 8090 00:21:04.064 idle_polls: 5208 00:21:04.064 sock_completions: 2882 00:21:04.064 nvme_completions: 5523 00:21:04.064 submitted_requests: 8286 00:21:04.064 queued_requests: 1 00:21:04.064 00:21:04.064 ==================== 00:21:04.064 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:04.064 TCP transport: 00:21:04.064 polls: 8794 00:21:04.064 idle_polls: 5142 00:21:04.064 sock_completions: 3652 00:21:04.064 nvme_completions: 5787 00:21:04.064 submitted_requests: 8584 00:21:04.064 queued_requests: 1 00:21:04.064 ======================================================== 00:21:04.064 Latency(us) 00:21:04.064 Device Information : IOPS MiB/s Average min max 00:21:04.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1380.20 345.05 99786.58 44003.29 393595.72 00:21:04.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1446.19 361.55 88843.11 45903.04 261878.86 00:21:04.064 ======================================================== 00:21:04.064 Total : 2826.39 706.60 94187.10 44003.29 393595.72 00:21:04.064 00:21:04.064 21:20:15 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:04.064 21:20:15 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:04.064 21:20:15 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:21:04.064 21:20:15 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:21:04.064 21:20:15 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:21:04.323 21:20:15 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=0607f5f0-e4bf-429c-b4c7-8c38ff86316f 00:21:04.323 21:20:15 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 0607f5f0-e4bf-429c-b4c7-8c38ff86316f 00:21:04.323 21:20:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=0607f5f0-e4bf-429c-b4c7-8c38ff86316f 00:21:04.323 21:20:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:21:04.323 21:20:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:21:04.323 21:20:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:21:04.323 21:20:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:04.582 21:20:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:21:04.582 { 00:21:04.582 "uuid": "0607f5f0-e4bf-429c-b4c7-8c38ff86316f", 00:21:04.582 "name": "lvs_0", 00:21:04.582 "base_bdev": "Nvme0n1", 00:21:04.582 "total_data_clusters": 1278, 00:21:04.582 "free_clusters": 1278, 00:21:04.582 "block_size": 4096, 00:21:04.582 "cluster_size": 4194304 00:21:04.582 } 00:21:04.582 ]' 00:21:04.582 21:20:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="0607f5f0-e4bf-429c-b4c7-8c38ff86316f") .free_clusters' 00:21:04.582 21:20:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:21:04.582 21:20:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="0607f5f0-e4bf-429c-b4c7-8c38ff86316f") .cluster_size' 00:21:04.582 5112 00:21:04.582 21:20:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # 
cs=4194304 00:21:04.582 21:20:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:21:04.582 21:20:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:21:04.582 21:20:16 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:21:04.582 21:20:16 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0607f5f0-e4bf-429c-b4c7-8c38ff86316f lbd_0 5112 00:21:05.149 21:20:16 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=b08549b0-cc88-4ef2-a030-73ae53ee2d1b 00:21:05.149 21:20:16 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore b08549b0-cc88-4ef2-a030-73ae53ee2d1b lvs_n_0 00:21:05.407 21:20:16 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=1d988081-d6f7-4af2-8ace-3c6c256e7257 00:21:05.407 21:20:16 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 1d988081-d6f7-4af2-8ace-3c6c256e7257 00:21:05.407 21:20:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=1d988081-d6f7-4af2-8ace-3c6c256e7257 00:21:05.407 21:20:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:21:05.407 21:20:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:21:05.407 21:20:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:21:05.407 21:20:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:05.666 21:20:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:21:05.666 { 00:21:05.666 "uuid": "0607f5f0-e4bf-429c-b4c7-8c38ff86316f", 00:21:05.666 "name": "lvs_0", 00:21:05.666 "base_bdev": "Nvme0n1", 00:21:05.666 "total_data_clusters": 1278, 00:21:05.666 "free_clusters": 0, 00:21:05.666 "block_size": 4096, 00:21:05.666 "cluster_size": 4194304 00:21:05.666 }, 00:21:05.666 { 00:21:05.666 "uuid": "1d988081-d6f7-4af2-8ace-3c6c256e7257", 00:21:05.666 "name": "lvs_n_0", 00:21:05.666 "base_bdev": "b08549b0-cc88-4ef2-a030-73ae53ee2d1b", 00:21:05.666 "total_data_clusters": 1276, 00:21:05.666 "free_clusters": 1276, 00:21:05.666 "block_size": 4096, 00:21:05.666 "cluster_size": 4194304 00:21:05.666 } 00:21:05.666 ]' 00:21:05.666 21:20:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="1d988081-d6f7-4af2-8ace-3c6c256e7257") .free_clusters' 00:21:05.666 21:20:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:21:05.666 21:20:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="1d988081-d6f7-4af2-8ace-3c6c256e7257") .cluster_size' 00:21:05.666 5104 00:21:05.666 21:20:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:21:05.666 21:20:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:21:05.666 21:20:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:21:05.666 21:20:17 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:21:05.666 21:20:17 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1d988081-d6f7-4af2-8ace-3c6c256e7257 lbd_nest_0 5104 00:21:05.925 21:20:17 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=104a5c6a-9c04-491f-bb57-1510a40a8fb5 00:21:05.925 21:20:17 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
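The namespaces for the remaining perf runs are carved out of logical volume stores: lvs_0 is created on Nvme0n1, its free capacity is derived from the cluster count (free MB = free_clusters * cluster_size / 1 MiB, i.e. 1278 * 4194304 / 1048576 = 5112), lbd_0 fills it, and the nested store lvs_n_0 is built on top of lbd_0 the same way (1276 clusters -> 5104 MB). A minimal, hedged sketch of one round of that provisioning and the re-export, reusing only RPC calls visible in the surrounding lines (error handling omitted; the jq filter is an illustrative stand-in for the get_lvs_free_mb helper):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Create a store on the NVMe bdev and read back its free capacity in MiB.
ls_guid=$($rpc bdev_lvol_create_lvstore Nvme0n1 lvs_0)
free_mb=$($rpc bdev_lvol_get_lvstores | jq -r --arg u "$ls_guid" \
    '.[] | select(.uuid==$u) | .free_clusters * .cluster_size / 1048576')
# Fill the store with a single lvol; the nested store repeats these steps on top of lbd_0.
lb_guid=$($rpc bdev_lvol_create -u "$ls_guid" lbd_0 "$free_mb")
# Export the resulting lvol over NVMe/TCP, mirroring the RPC calls just above and below.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$lb_guid"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420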
00:21:06.213 21:20:17 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:21:06.213 21:20:17 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 104a5c6a-9c04-491f-bb57-1510a40a8fb5 00:21:06.471 21:20:17 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:06.729 21:20:18 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:21:06.729 21:20:18 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:21:06.729 21:20:18 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:06.729 21:20:18 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:06.729 21:20:18 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:06.987 Initializing NVMe Controllers 00:21:06.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:06.987 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:06.987 WARNING: Some requested NVMe devices were skipped 00:21:06.987 No valid NVMe controllers or AIO or URING devices found 00:21:06.987 21:20:18 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:06.987 21:20:18 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.200 Initializing NVMe Controllers 00:21:19.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:19.200 Initialization complete. Launching workers. 
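host/perf.sh then sweeps queue depth (1, 32, 128) against IO size (512 and 131072 bytes), each as a ten-second randrw run; the 512-byte cases are skipped because the exported lvol namespace uses 4096-byte blocks (its size of 5351931904 bytes in the warnings is exactly 5104 MiB). A hedged sketch of that loop, using only flags that appear in the log:

perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
for qd in 1 32 128; do
  for io in 512 131072; do
    # 512-byte IOs are rejected for the 4096-byte-block namespace, so those runs are skipped.
    $perf -q "$qd" -o "$io" -w randrw -M 50 -t 10 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  done
done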
00:21:19.200 ======================================================== 00:21:19.200 Latency(us) 00:21:19.200 Device Information : IOPS MiB/s Average min max 00:21:19.200 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 839.00 104.87 1190.68 377.47 9796.08 00:21:19.200 ======================================================== 00:21:19.200 Total : 839.00 104.87 1190.68 377.47 9796.08 00:21:19.200 00:21:19.200 21:20:28 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:19.200 21:20:28 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:19.200 21:20:28 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.200 Initializing NVMe Controllers 00:21:19.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.200 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:19.200 WARNING: Some requested NVMe devices were skipped 00:21:19.200 No valid NVMe controllers or AIO or URING devices found 00:21:19.200 21:20:29 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:19.200 21:20:29 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:29.173 Initializing NVMe Controllers 00:21:29.173 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:29.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:29.173 Initialization complete. Launching workers. 
00:21:29.173 ======================================================== 00:21:29.173 Latency(us) 00:21:29.173 Device Information : IOPS MiB/s Average min max 00:21:29.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1309.70 163.71 24476.82 6386.11 70315.12 00:21:29.173 ======================================================== 00:21:29.173 Total : 1309.70 163.71 24476.82 6386.11 70315.12 00:21:29.173 00:21:29.173 21:20:39 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:29.173 21:20:39 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:29.174 21:20:39 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:29.174 Initializing NVMe Controllers 00:21:29.174 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:29.174 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:29.174 WARNING: Some requested NVMe devices were skipped 00:21:29.174 No valid NVMe controllers or AIO or URING devices found 00:21:29.174 21:20:39 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:29.174 21:20:39 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:39.164 Initializing NVMe Controllers 00:21:39.164 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:39.164 Controller IO queue size 128, less than required. 00:21:39.164 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:39.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:39.164 Initialization complete. Launching workers. 
00:21:39.164 ======================================================== 00:21:39.164 Latency(us) 00:21:39.164 Device Information : IOPS MiB/s Average min max 00:21:39.164 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3419.38 427.42 37456.27 12915.78 88497.56 00:21:39.164 ======================================================== 00:21:39.164 Total : 3419.38 427.42 37456.27 12915.78 88497.56 00:21:39.164 00:21:39.164 21:20:50 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.422 21:20:50 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 104a5c6a-9c04-491f-bb57-1510a40a8fb5 00:21:39.680 21:20:51 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:39.938 21:20:51 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b08549b0-cc88-4ef2-a030-73ae53ee2d1b 00:21:40.503 21:20:51 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:40.503 21:20:52 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:40.503 21:20:52 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:40.504 21:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:40.504 21:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:40.504 21:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.504 21:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:40.504 21:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.504 21:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.504 rmmod nvme_tcp 00:21:40.504 rmmod nvme_fabrics 00:21:40.504 rmmod nvme_keyring 00:21:40.762 21:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.762 21:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:40.762 21:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:40.762 21:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 80199 ']' 00:21:40.762 21:20:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 80199 00:21:40.762 21:20:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 80199 ']' 00:21:40.762 21:20:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 80199 00:21:40.762 21:20:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:21:40.762 21:20:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.762 21:20:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80199 00:21:40.762 killing process with pid 80199 00:21:40.762 21:20:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:40.762 21:20:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:40.762 21:20:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80199' 00:21:40.762 21:20:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 80199 00:21:40.762 21:20:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 80199 00:21:43.294 21:20:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.294 21:20:54 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:43.294 21:20:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:43.294 21:20:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.294 21:20:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.294 21:20:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.294 21:20:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.294 21:20:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.294 21:20:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:43.294 00:21:43.294 real 0m53.811s 00:21:43.294 user 3m22.339s 00:21:43.294 sys 0m12.401s 00:21:43.294 21:20:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:43.294 21:20:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:43.294 ************************************ 00:21:43.294 END TEST nvmf_perf 00:21:43.294 ************************************ 00:21:43.294 21:20:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:43.294 21:20:54 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:43.294 21:20:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:43.294 21:20:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:43.294 21:20:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:43.294 ************************************ 00:21:43.294 START TEST nvmf_fio_host 00:21:43.294 ************************************ 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:43.294 * Looking for test storage... 
00:21:43.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:43.294 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
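Because NET_TYPE=virt, nvmf_veth_init (next) builds a purely virtual topology: one veth pair per endpoint, the target ends moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers enslaved to a bridge. A condensed, hedged sketch of those steps with the same names and addresses used below (the second target interface on 10.0.0.3 and the loopback/FORWARD details are left out for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # sanity check: the initiator can reach the target address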
00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:43.295 Cannot find device "nvmf_tgt_br" 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:43.295 Cannot find device "nvmf_tgt_br2" 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:43.295 Cannot find device "nvmf_tgt_br" 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:43.295 Cannot find device "nvmf_tgt_br2" 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:43.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:43.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:43.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:21:43.295 00:21:43.295 --- 10.0.0.2 ping statistics --- 00:21:43.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.295 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:43.295 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:43.295 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:21:43.295 00:21:43.295 --- 10.0.0.3 ping statistics --- 00:21:43.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.295 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:43.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:43.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:21:43.295 00:21:43.295 --- 10.0.0.1 ping statistics --- 00:21:43.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.295 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=81045 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 81045 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 81045 ']' 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.295 21:20:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.554 [2024-07-14 21:20:54.940601] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:43.554 [2024-07-14 21:20:54.941105] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.813 [2024-07-14 21:20:55.128643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:43.813 [2024-07-14 21:20:55.352924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
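With the namespace networking verified, the harness starts nvmf_tgt inside the namespace (the invocation above) and then configures it over the default RPC socket (the calls that follow the startup notices). A hedged sketch of that bring-up, with flags and names copied from the log and the waitforlisten step omitted:

# -i 0 is the shared-memory id, -e 0xFFFF enables all tracepoint groups, -m 0xF pins 4 cores.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB IO unit size
$rpc bdev_malloc_create 64 512 -b Malloc1         # 64 MB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420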
00:21:43.813 [2024-07-14 21:20:55.353306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.813 [2024-07-14 21:20:55.353466] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.813 [2024-07-14 21:20:55.353808] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.813 [2024-07-14 21:20:55.354026] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.813 [2024-07-14 21:20:55.354191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.813 [2024-07-14 21:20:55.354357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.813 [2024-07-14 21:20:55.354750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.813 [2024-07-14 21:20:55.354796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:44.072 [2024-07-14 21:20:55.544551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:44.639 21:20:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.639 21:20:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:21:44.639 21:20:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:44.639 [2024-07-14 21:20:56.153605] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.897 21:20:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:44.897 21:20:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:44.897 21:20:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.897 21:20:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:45.156 Malloc1 00:21:45.156 21:20:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:45.428 21:20:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:45.684 21:20:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:45.942 [2024-07-14 21:20:57.235972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.942 21:20:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:46.200 21:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:21:46.201 21:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:46.201 21:20:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:46.201 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:46.201 fio-3.35 00:21:46.201 Starting 1 thread 00:21:48.731 00:21:48.731 test: (groupid=0, jobs=1): err= 0: pid=81118: Sun Jul 14 21:21:00 2024 00:21:48.731 read: IOPS=6844, BW=26.7MiB/s (28.0MB/s)(53.7MiB/2008msec) 00:21:48.731 slat (usec): min=2, max=288, avg= 3.72, stdev= 3.56 00:21:48.731 clat (usec): min=2304, max=18084, avg=9686.93, stdev=714.05 00:21:48.731 lat (usec): min=2351, max=18088, avg=9690.66, stdev=713.74 00:21:48.731 clat percentiles (usec): 00:21:48.731 | 1.00th=[ 8356], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9241], 00:21:48.731 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9765], 00:21:48.731 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10421], 95.00th=[10683], 00:21:48.731 | 99.00th=[11076], 99.50th=[11600], 99.90th=[16188], 99.95th=[16909], 00:21:48.731 | 99.99th=[17957] 00:21:48.731 bw ( KiB/s): min=26160, max=28184, per=99.94%, avg=27362.00, stdev=875.66, samples=4 00:21:48.731 iops : min= 6540, max= 7046, avg=6840.50, stdev=218.91, samples=4 00:21:48.731 write: IOPS=6853, BW=26.8MiB/s (28.1MB/s)(53.8MiB/2008msec); 0 zone resets 00:21:48.731 slat (usec): min=3, max=153, avg= 4.02, stdev= 2.21 00:21:48.731 clat (usec): min=2158, max=16977, avg=8877.01, stdev=671.08 00:21:48.731 lat (usec): min=2174, max=16981, avg=8881.03, stdev=670.88 00:21:48.731 clat percentiles (usec): 00:21:48.731 | 1.00th=[ 7570], 5.00th=[ 8029], 10.00th=[ 8225], 20.00th=[ 8455], 00:21:48.731 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:21:48.731 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9503], 95.00th=[ 9765], 00:21:48.731 | 
99.00th=[10290], 99.50th=[10683], 99.90th=[15795], 99.95th=[16319], 00:21:48.731 | 99.99th=[16909] 00:21:48.731 bw ( KiB/s): min=27008, max=27760, per=99.93%, avg=27394.00, stdev=346.40, samples=4 00:21:48.731 iops : min= 6752, max= 6940, avg=6848.50, stdev=86.60, samples=4 00:21:48.731 lat (msec) : 4=0.11%, 10=84.36%, 20=15.54% 00:21:48.731 cpu : usr=68.11%, sys=23.67%, ctx=13, majf=0, minf=1539 00:21:48.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:48.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:48.731 issued rwts: total=13744,13761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:48.731 00:21:48.731 Run status group 0 (all jobs): 00:21:48.731 READ: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=53.7MiB (56.3MB), run=2008-2008msec 00:21:48.731 WRITE: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=53.8MiB (56.4MB), run=2008-2008msec 00:21:48.731 ----------------------------------------------------- 00:21:48.731 Suppressions used: 00:21:48.731 count bytes template 00:21:48.731 1 57 /usr/src/fio/parse.c 00:21:48.731 1 8 libtcmalloc_minimal.so 00:21:48.731 ----------------------------------------------------- 00:21:48.731 00:21:48.731 21:21:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:48.731 21:21:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:48.731 21:21:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:48.731 21:21:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:48.731 21:21:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:48.731 21:21:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:48.731 21:21:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:48.731 21:21:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:48.731 21:21:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:48.731 21:21:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:48.731 21:21:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:48.731 21:21:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:48.989 21:21:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:48.989 21:21:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:48.989 21:21:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:21:48.989 21:21:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:48.989 21:21:00 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:48.989 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:48.989 fio-3.35 00:21:48.989 Starting 1 thread 00:21:51.519 00:21:51.519 test: (groupid=0, jobs=1): err= 0: pid=81164: Sun Jul 14 21:21:02 2024 00:21:51.519 read: IOPS=6693, BW=105MiB/s (110MB/s)(210MiB/2011msec) 00:21:51.519 slat (usec): min=3, max=138, avg= 5.00, stdev= 2.26 00:21:51.519 clat (usec): min=4106, max=21903, avg=10792.49, stdev=3411.96 00:21:51.519 lat (usec): min=4112, max=21908, avg=10797.49, stdev=3411.96 00:21:51.519 clat percentiles (usec): 00:21:51.519 | 1.00th=[ 5211], 5.00th=[ 6194], 10.00th=[ 6783], 20.00th=[ 7635], 00:21:51.519 | 30.00th=[ 8455], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[11338], 00:21:51.519 | 70.00th=[12649], 80.00th=[13566], 90.00th=[15401], 95.00th=[17433], 00:21:51.519 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21627], 99.95th=[21627], 00:21:51.519 | 99.99th=[21890] 00:21:51.519 bw ( KiB/s): min=43456, max=61408, per=50.03%, avg=53584.00, stdev=8679.12, samples=4 00:21:51.519 iops : min= 2716, max= 3838, avg=3349.00, stdev=542.45, samples=4 00:21:51.519 write: IOPS=3905, BW=61.0MiB/s (64.0MB/s)(110MiB/1796msec); 0 zone resets 00:21:51.519 slat (usec): min=38, max=261, avg=42.33, stdev= 6.91 00:21:51.519 clat (usec): min=8201, max=27804, avg=14990.42, stdev=2527.00 00:21:51.519 lat (usec): min=8240, max=27844, avg=15032.74, stdev=2526.66 00:21:51.519 clat percentiles (usec): 00:21:51.519 | 1.00th=[10290], 5.00th=[11469], 10.00th=[12256], 20.00th=[13042], 00:21:51.519 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14615], 60.00th=[15139], 00:21:51.519 | 70.00th=[15926], 80.00th=[16909], 90.00th=[18220], 95.00th=[20055], 00:21:51.519 | 99.00th=[22676], 99.50th=[23462], 99.90th=[24511], 99.95th=[27132], 00:21:51.519 | 99.99th=[27919] 00:21:51.519 bw ( KiB/s): min=46208, max=63584, per=88.97%, avg=55600.00, stdev=8429.74, samples=4 00:21:51.519 iops : min= 2888, max= 3974, avg=3475.00, stdev=526.86, samples=4 00:21:51.519 lat (msec) : 10=31.04%, 20=66.41%, 50=2.54% 00:21:51.519 cpu : usr=79.76%, sys=14.92%, ctx=7, majf=0, minf=2118 00:21:51.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:51.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:51.519 issued rwts: total=13461,7015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:51.519 00:21:51.519 Run status group 0 (all jobs): 00:21:51.519 READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=210MiB (221MB), run=2011-2011msec 00:21:51.519 WRITE: bw=61.0MiB/s (64.0MB/s), 61.0MiB/s-61.0MiB/s (64.0MB/s-64.0MB/s), io=110MiB (115MB), run=1796-1796msec 00:21:51.519 ----------------------------------------------------- 00:21:51.519 Suppressions used: 00:21:51.519 count bytes template 00:21:51.519 1 57 /usr/src/fio/parse.c 00:21:51.519 282 27072 /usr/src/fio/iolog.c 00:21:51.519 1 8 libtcmalloc_minimal.so 00:21:51.519 ----------------------------------------------------- 00:21:51.519 00:21:51.519 21:21:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:21:51.788 21:21:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:51.788 21:21:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:51.788 21:21:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:51.788 21:21:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:21:51.788 21:21:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:21:51.788 21:21:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:51.788 21:21:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:51.788 21:21:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:21:52.059 21:21:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:21:52.059 21:21:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:52.059 21:21:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:21:52.317 Nvme0n1 00:21:52.317 21:21:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:52.574 21:21:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=7ecd6839-01d0-40f0-83c5-6a9208677968 00:21:52.574 21:21:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 7ecd6839-01d0-40f0-83c5-6a9208677968 00:21:52.574 21:21:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=7ecd6839-01d0-40f0-83c5-6a9208677968 00:21:52.574 21:21:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:21:52.574 21:21:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:21:52.574 21:21:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:21:52.574 21:21:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:52.831 21:21:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:21:52.831 { 00:21:52.831 "uuid": "7ecd6839-01d0-40f0-83c5-6a9208677968", 00:21:52.831 "name": "lvs_0", 00:21:52.831 "base_bdev": "Nvme0n1", 00:21:52.831 "total_data_clusters": 4, 00:21:52.831 "free_clusters": 4, 00:21:52.831 "block_size": 4096, 00:21:52.831 "cluster_size": 1073741824 00:21:52.831 } 00:21:52.831 ]' 00:21:52.831 21:21:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="7ecd6839-01d0-40f0-83c5-6a9208677968") .free_clusters' 00:21:52.831 21:21:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:21:52.831 21:21:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="7ecd6839-01d0-40f0-83c5-6a9208677968") .cluster_size' 00:21:52.831 4096 00:21:52.831 21:21:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:21:52.831 21:21:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:21:52.831 21:21:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:21:52.831 21:21:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 
lbd_0 4096 00:21:53.088 5fab2797-6b82-4b38-87c4-11c85b2b2fa3 00:21:53.088 21:21:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:53.346 21:21:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:53.603 21:21:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:53.861 21:21:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:54.118 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:54.118 fio-3.35 00:21:54.118 Starting 1 thread 00:21:56.648 00:21:56.648 test: (groupid=0, jobs=1): err= 0: pid=81267: Sun Jul 14 21:21:07 2024 00:21:56.648 read: IOPS=5054, BW=19.7MiB/s (20.7MB/s)(39.7MiB/2010msec) 00:21:56.648 slat (usec): min=2, max=396, avg= 3.49, stdev= 4.98 00:21:56.648 clat (usec): min=3620, max=23208, avg=13219.98, stdev=1134.50 00:21:56.648 lat (usec): min=3639, max=23212, avg=13223.47, stdev=1133.97 00:21:56.648 clat 
percentiles (usec): 00:21:56.648 | 1.00th=[10814], 5.00th=[11731], 10.00th=[11994], 20.00th=[12387], 00:21:56.648 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:21:56.648 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14484], 95.00th=[14877], 00:21:56.648 | 99.00th=[15664], 99.50th=[16450], 99.90th=[21627], 99.95th=[22676], 00:21:56.648 | 99.99th=[23200] 00:21:56.648 bw ( KiB/s): min=19352, max=20504, per=99.82%, avg=20180.00, stdev=554.43, samples=4 00:21:56.648 iops : min= 4838, max= 5126, avg=5045.00, stdev=138.61, samples=4 00:21:56.648 write: IOPS=5048, BW=19.7MiB/s (20.7MB/s)(39.6MiB/2010msec); 0 zone resets 00:21:56.648 slat (usec): min=3, max=271, avg= 3.76, stdev= 3.09 00:21:56.648 clat (usec): min=3117, max=21686, avg=11995.24, stdev=1044.94 00:21:56.648 lat (usec): min=3136, max=21690, avg=11999.00, stdev=1044.65 00:21:56.648 clat percentiles (usec): 00:21:56.648 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[10814], 20.00th=[11207], 00:21:56.648 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:21:56.648 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:21:56.648 | 99.00th=[14222], 99.50th=[14746], 99.90th=[19792], 99.95th=[21103], 00:21:56.648 | 99.99th=[21627] 00:21:56.648 bw ( KiB/s): min=19904, max=20352, per=99.93%, avg=20178.00, stdev=191.82, samples=4 00:21:56.648 iops : min= 4976, max= 5088, avg=5044.50, stdev=47.95, samples=4 00:21:56.648 lat (msec) : 4=0.03%, 10=0.90%, 20=98.91%, 50=0.16% 00:21:56.648 cpu : usr=73.07%, sys=20.91%, ctx=7, majf=0, minf=1538 00:21:56.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:56.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:56.648 issued rwts: total=10159,10147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:56.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:56.649 00:21:56.649 Run status group 0 (all jobs): 00:21:56.649 READ: bw=19.7MiB/s (20.7MB/s), 19.7MiB/s-19.7MiB/s (20.7MB/s-20.7MB/s), io=39.7MiB (41.6MB), run=2010-2010msec 00:21:56.649 WRITE: bw=19.7MiB/s (20.7MB/s), 19.7MiB/s-19.7MiB/s (20.7MB/s-20.7MB/s), io=39.6MiB (41.6MB), run=2010-2010msec 00:21:56.649 ----------------------------------------------------- 00:21:56.649 Suppressions used: 00:21:56.649 count bytes template 00:21:56.649 1 58 /usr/src/fio/parse.c 00:21:56.649 1 8 libtcmalloc_minimal.so 00:21:56.649 ----------------------------------------------------- 00:21:56.649 00:21:56.649 21:21:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:56.907 21:21:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:57.165 21:21:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=84a20f86-3b19-495f-8db4-6a591dc1505a 00:21:57.165 21:21:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 84a20f86-3b19-495f-8db4-6a591dc1505a 00:21:57.165 21:21:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=84a20f86-3b19-495f-8db4-6a591dc1505a 00:21:57.165 21:21:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:21:57.165 21:21:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:21:57.165 21:21:08 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1367 -- # local cs 00:21:57.165 21:21:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:57.422 21:21:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:21:57.422 { 00:21:57.422 "uuid": "7ecd6839-01d0-40f0-83c5-6a9208677968", 00:21:57.422 "name": "lvs_0", 00:21:57.422 "base_bdev": "Nvme0n1", 00:21:57.422 "total_data_clusters": 4, 00:21:57.422 "free_clusters": 0, 00:21:57.422 "block_size": 4096, 00:21:57.422 "cluster_size": 1073741824 00:21:57.422 }, 00:21:57.422 { 00:21:57.422 "uuid": "84a20f86-3b19-495f-8db4-6a591dc1505a", 00:21:57.422 "name": "lvs_n_0", 00:21:57.422 "base_bdev": "5fab2797-6b82-4b38-87c4-11c85b2b2fa3", 00:21:57.422 "total_data_clusters": 1022, 00:21:57.422 "free_clusters": 1022, 00:21:57.422 "block_size": 4096, 00:21:57.422 "cluster_size": 4194304 00:21:57.422 } 00:21:57.422 ]' 00:21:57.422 21:21:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="84a20f86-3b19-495f-8db4-6a591dc1505a") .free_clusters' 00:21:57.680 21:21:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:21:57.680 21:21:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="84a20f86-3b19-495f-8db4-6a591dc1505a") .cluster_size' 00:21:57.680 4088 00:21:57.680 21:21:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:21:57.680 21:21:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:21:57.680 21:21:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:21:57.680 21:21:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:57.938 a07d9bfe-0cbb-4e2f-a248-a562a2a75d4b 00:21:57.938 21:21:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:58.197 21:21:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:58.455 21:21:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:58.713 21:21:10 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:58.713 21:21:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:58.713 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:58.713 fio-3.35 00:21:58.713 Starting 1 thread 00:22:01.241 00:22:01.241 test: (groupid=0, jobs=1): err= 0: pid=81340: Sun Jul 14 21:21:12 2024 00:22:01.241 read: IOPS=4499, BW=17.6MiB/s (18.4MB/s)(35.4MiB/2012msec) 00:22:01.241 slat (usec): min=2, max=248, avg= 3.51, stdev= 3.38 00:22:01.241 clat (usec): min=3751, max=23776, avg=14818.58, stdev=1255.20 00:22:01.241 lat (usec): min=3758, max=23779, avg=14822.10, stdev=1254.85 00:22:01.241 clat percentiles (usec): 00:22:01.241 | 1.00th=[12125], 5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 00:22:01.241 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15139], 00:22:01.241 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16319], 95.00th=[16712], 00:22:01.241 | 99.00th=[17695], 99.50th=[18220], 99.90th=[21890], 99.95th=[21890], 00:22:01.241 | 99.99th=[23725] 00:22:01.241 bw ( KiB/s): min=17064, max=18352, per=99.97%, avg=17992.00, stdev=622.80, samples=4 00:22:01.241 iops : min= 4266, max= 4588, avg=4498.00, stdev=155.70, samples=4 00:22:01.241 write: IOPS=4502, BW=17.6MiB/s (18.4MB/s)(35.4MiB/2012msec); 0 zone resets 00:22:01.241 slat (usec): min=2, max=196, avg= 3.61, stdev= 2.41 00:22:01.241 clat (usec): min=2409, max=23814, avg=13415.50, stdev=1217.93 00:22:01.241 lat (usec): min=2427, max=23818, avg=13419.10, stdev=1217.78 00:22:01.241 clat percentiles (usec): 00:22:01.241 | 1.00th=[10945], 5.00th=[11731], 10.00th=[12125], 20.00th=[12518], 00:22:01.241 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:22:01.241 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14746], 95.00th=[15139], 00:22:01.241 | 99.00th=[16188], 99.50th=[16909], 99.90th=[21365], 99.95th=[21627], 00:22:01.241 | 99.99th=[23725] 00:22:01.241 bw ( KiB/s): min=17888, max=18056, per=99.82%, avg=17978.00, stdev=86.50, samples=4 00:22:01.241 iops : min= 4472, max= 4514, avg=4494.50, stdev=21.63, samples=4 00:22:01.241 lat (msec) : 4=0.04%, 10=0.36%, 20=99.36%, 50=0.23% 00:22:01.241 cpu : usr=74.64%, sys=19.99%, ctx=6, majf=0, minf=1538 00:22:01.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:01.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:01.241 issued rwts: total=9053,9059,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:01.241 00:22:01.241 Run status group 0 (all jobs): 00:22:01.241 READ: bw=17.6MiB/s (18.4MB/s), 17.6MiB/s-17.6MiB/s (18.4MB/s-18.4MB/s), io=35.4MiB (37.1MB), run=2012-2012msec 00:22:01.241 WRITE: bw=17.6MiB/s (18.4MB/s), 17.6MiB/s-17.6MiB/s (18.4MB/s-18.4MB/s), io=35.4MiB (37.1MB), run=2012-2012msec 00:22:01.499 ----------------------------------------------------- 00:22:01.499 Suppressions used: 00:22:01.499 count bytes template 00:22:01.499 1 58 /usr/src/fio/parse.c 00:22:01.499 1 8 libtcmalloc_minimal.so 00:22:01.499 ----------------------------------------------------- 00:22:01.499 00:22:01.499 21:21:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:01.758 21:21:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:22:01.758 21:21:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:22:02.015 21:21:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:22:02.273 21:21:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:22:02.531 21:21:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:22:02.790 21:21:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:03.048 rmmod nvme_tcp 00:22:03.048 rmmod nvme_fabrics 00:22:03.048 rmmod nvme_keyring 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 81045 ']' 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 81045 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 81045 ']' 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 81045 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.048 
21:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81045 00:22:03.048 killing process with pid 81045 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81045' 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 81045 00:22:03.048 21:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 81045 00:22:04.953 21:21:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:04.953 21:21:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:04.953 21:21:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:04.953 21:21:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:04.953 21:21:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:04.953 21:21:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.953 21:21:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:04.953 21:21:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.953 21:21:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:04.953 00:22:04.953 real 0m21.695s 00:22:04.953 user 1m33.665s 00:22:04.953 sys 0m4.697s 00:22:04.953 21:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:04.953 21:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.953 ************************************ 00:22:04.954 END TEST nvmf_fio_host 00:22:04.954 ************************************ 00:22:04.954 21:21:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:04.954 21:21:16 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:04.954 21:21:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:04.954 21:21:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:04.954 21:21:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:04.954 ************************************ 00:22:04.954 START TEST nvmf_failover 00:22:04.954 ************************************ 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:04.954 * Looking for test storage... 
00:22:04.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:04.954 
21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:04.954 Cannot find device "nvmf_tgt_br" 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:04.954 Cannot find device "nvmf_tgt_br2" 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:04.954 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:04.954 Cannot find device "nvmf_tgt_br" 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:04.955 Cannot find device "nvmf_tgt_br2" 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:04.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:04.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:04.955 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:05.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:05.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:22:05.215 00:22:05.215 --- 10.0.0.2 ping statistics --- 00:22:05.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.215 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:05.215 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:05.215 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:22:05.215 00:22:05.215 --- 10.0.0.3 ping statistics --- 00:22:05.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.215 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:05.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:05.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:05.215 00:22:05.215 --- 10.0.0.1 ping statistics --- 00:22:05.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.215 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:05.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=81585 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 81585 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 81585 ']' 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
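For reference, the network plumbing exercised above by nvmf_veth_init reduces to the sketch below. This is a condensed reconstruction from the commands visible in this log (namespace nvmf_tgt_ns_spdk, the nvmf_* veth pairs, bridge nvmf_br and the 10.0.0.0/24 addresses are the names used in this run), not a verbatim excerpt of nvmf/common.sh:

# Create the target-side network namespace.
ip netns add nvmf_tgt_ns_spdk
# veth pairs: one end stays in the root namespace, the peer gets bridged.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Target-facing interfaces move into the test namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: initiator 10.0.0.1, target portals 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the peer ends together and let NVMe/TCP traffic through.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Sanity checks, matching the pings recorded above.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this topology in place, nvmf_tgt is launched inside nvmf_tgt_ns_spdk (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE, as shown in the log) so that the host and the target talk over the bridged veth links rather than loopback.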
00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.215 21:21:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:05.215 [2024-07-14 21:21:16.679709] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:05.215 [2024-07-14 21:21:16.679889] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.475 [2024-07-14 21:21:16.861173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:05.733 [2024-07-14 21:21:17.058183] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.733 [2024-07-14 21:21:17.058276] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.733 [2024-07-14 21:21:17.058292] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.733 [2024-07-14 21:21:17.058305] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.733 [2024-07-14 21:21:17.058314] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.733 [2024-07-14 21:21:17.059194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.733 [2024-07-14 21:21:17.059370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.733 [2024-07-14 21:21:17.059380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.733 [2024-07-14 21:21:17.226600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:06.312 21:21:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.312 21:21:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:06.312 21:21:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.312 21:21:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.312 21:21:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:06.312 21:21:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.312 21:21:17 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:06.571 [2024-07-14 21:21:17.874821] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.571 21:21:17 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:06.830 Malloc0 00:22:06.830 21:21:18 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:07.090 21:21:18 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:07.090 21:21:18 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:07.350 [2024-07-14 21:21:18.824685] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:22:07.350 21:21:18 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:07.609 [2024-07-14 21:21:19.092988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:07.609 21:21:19 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:07.868 [2024-07-14 21:21:19.317216] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:07.868 21:21:19 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:07.868 21:21:19 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=81643 00:22:07.868 21:21:19 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:07.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:07.868 21:21:19 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 81643 /var/tmp/bdevperf.sock 00:22:07.868 21:21:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 81643 ']' 00:22:07.868 21:21:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.868 21:21:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.868 21:21:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
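Condensed, the target-side setup that host/failover.sh has performed up to this point is the following RPC sequence; every command and argument below is taken from the log, with the script path collapsed into a $rpc_py variable for readability:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # path as used in this run

$rpc_py nvmf_create_transport -t tcp -o -u 8192                       # TCP transport
$rpc_py bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB RAM-backed bdev, 512 B blocks
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # expose Malloc0 as a namespace
# Three listeners on the same address, so the host has alternate ports to fail over to:
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# bdevperf is then started in wait-for-RPC mode (-z) on its own RPC socket:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

The separate /var/tmp/bdevperf.sock socket matters because the next steps drive bdevperf (the host) and nvmf_tgt (the target) independently: path changes are injected on the target while I/O keeps running on the host.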
00:22:07.868 21:21:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.868 21:21:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:09.243 21:21:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.243 21:21:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:09.243 21:21:20 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:09.243 NVMe0n1 00:22:09.243 21:21:20 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:09.501 00:22:09.759 21:21:21 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=81672 00:22:09.759 21:21:21 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:09.759 21:21:21 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:10.694 21:21:22 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:10.952 21:21:22 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:14.233 21:21:25 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:14.233 00:22:14.233 21:21:25 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:14.491 [2024-07-14 21:21:26.018307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:22:14.491 21:21:26 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:17.768 21:21:29 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.768 [2024-07-14 21:21:29.297387] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.026 21:21:29 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:18.960 21:21:30 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:19.218 21:21:30 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 81672 00:22:25.777 0 00:22:25.777 21:21:36 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 81643 00:22:25.777 21:21:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 81643 ']' 00:22:25.777 21:21:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 81643 00:22:25.777 21:21:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:25.777 21:21:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:25.777 21:21:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81643 
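On the host side, the failover itself is driven through bdevperf's private RPC socket: the controller is attached at two portals, a 15-second verify workload is started, and listeners are removed and re-added on the target to force path switches while I/O is in flight. A condensed sketch of that sequence, using only commands that appear in the log (the sleeps mirror failover.sh):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc=/var/tmp/bdevperf.sock

# Attach the same subsystem through two portals so bdev_nvme has an alternate path.
$rpc_py -s $bdevperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc_py -s $bdevperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Kick off the verify workload asynchronously via bdevperf's RPC helper.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bdevperf_rpc perform_tests &

# While I/O runs, take paths away on the target and bring new ones up.
sleep 1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
$rpc_py -s $bdevperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
wait    # the workload should finish with 0 errors despite the path changes

The "0" printed above is that error count: the ABORTED - SQ DELETION completions dumped further down are the expected side effect of tearing down the active listener, and bdev_nvme retries the affected commands on the surviving path.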
00:22:25.777 killing process with pid 81643 00:22:25.777 21:21:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:25.777 21:21:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:25.777 21:21:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81643' 00:22:25.777 21:21:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 81643 00:22:25.777 21:21:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 81643 00:22:26.044 21:21:37 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:26.044 [2024-07-14 21:21:19.425689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:26.045 [2024-07-14 21:21:19.425898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81643 ] 00:22:26.045 [2024-07-14 21:21:19.584667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.045 [2024-07-14 21:21:19.770489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.045 [2024-07-14 21:21:19.933899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:26.045 Running I/O for 15 seconds... 00:22:26.045 [2024-07-14 21:21:22.325389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.045 [2024-07-14 21:21:22.325506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.325539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.045 [2024-07-14 21:21:22.325564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.325586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.045 [2024-07-14 21:21:22.325609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.325630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.045 [2024-07-14 21:21:22.325653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.325674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:22:26.045 [2024-07-14 21:21:22.326074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.045 [2024-07-14 21:21:22.326111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:26.045 [2024-07-14 21:21:22.326688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.326975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.326997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327198] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.045 [2024-07-14 21:21:22.327876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.045 [2024-07-14 21:21:22.327897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.327925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.327947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.327973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.327995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 
21:21:22.328744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.328955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.328984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:115 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.329964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.329986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.330012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.330035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.330060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.330082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.046 [2024-07-14 21:21:22.330107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.046 [2024-07-14 21:21:22.330129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.330176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.330225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.330273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50144 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.330360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.330407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.330455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.330505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.330553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.330602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.330651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.330700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.330747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.330809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 
21:21:22.330860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.330907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.330961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.330988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.047 [2024-07-14 21:21:22.331784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.047 [2024-07-14 21:21:22.331835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.047 [2024-07-14 21:21:22.331886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.047 [2024-07-14 21:21:22.331933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.331958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.047 [2024-07-14 21:21:22.331980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.332005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.047 [2024-07-14 21:21:22.332026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.332052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.047 [2024-07-14 21:21:22.332073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.332099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.047 [2024-07-14 21:21:22.332120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.332146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.047 [2024-07-14 21:21:22.332167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.332192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.047 [2024-07-14 21:21:22.332214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.332251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.047 [2024-07-14 21:21:22.332273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.047 [2024-07-14 21:21:22.332303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.047 [2024-07-14 21:21:22.332326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:22.332351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:22.332383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:22.332414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:22.332436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:22.332462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:22.332483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:22.332509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:22.332530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:22.332556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.048 [2024-07-14 21:21:22.332577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:22.332601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(5) to be set 00:22:26.048 [2024-07-14 21:21:22.332628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.048 [2024-07-14 21:21:22.332653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.048 [2024-07-14 21:21:22.332672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50384 len:8 PRP1 0x0 PRP2 0x0 00:22:26.048 [2024-07-14 21:21:22.332693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:22.333000] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 00:22:26.048 [2024-07-14 21:21:22.333032] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:26.048 [2024-07-14 21:21:22.333055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:26.048 [2024-07-14 21:21:22.337172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:26.048 [2024-07-14 21:21:22.337233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:22:26.048 [2024-07-14 21:21:22.382991] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
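The block above is the expected flood of paired nvme_io_qpair_print_command / spdk_nvme_print_completion notices: queued I/O is aborted with "ABORTED - SQ DELETION (00/08)" while the qpair is torn down, then bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421 and the controller reset completes. A minimal sketch for condensing that output into per-opcode and per-status counts is shown below; the script name and regular expressions are illustrative only (not part of the SPDK tree) and assume the exact notice format printed in this console log.

#!/usr/bin/env python3
# abort_summary.py - hypothetical helper, not shipped with SPDK: tallies the
# command/completion notices emitted while queued I/O is aborted during SQ deletion.
import re
import sys
from collections import Counter

# Matches e.g. "nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49504 len:8 ..."
CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+)")
# Matches e.g. "spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 ..."
CPL_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: (.+?) \((\d{2}/\d{2})\)")

def summarize(stream):
    """Count aborted commands by opcode and completions by status string."""
    opcodes = Counter()
    statuses = Counter()
    for line in stream:
        for m in CMD_RE.finditer(line):
            opcodes[m.group(1)] += 1
        for m in CPL_RE.finditer(line):
            statuses["%s (%s)" % (m.group(1), m.group(2))] += 1
    return opcodes, statuses

if __name__ == "__main__":
    ops, stats = summarize(sys.stdin)
    print("aborted commands by opcode:", dict(ops))
    print("completions by status:     ", dict(stats))

Usage would be along the lines of "python3 abort_summary.py < console.log", where console.log is a saved copy of this build output; the counts make it easy to confirm that every aborted command in the burst carries the same SQ DELETION status before the failover notice.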
00:22:26.048 [2024-07-14 21:21:26.019091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.019153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.019261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.048 [2024-07-14 21:21:26.019309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.048 [2024-07-14 21:21:26.019354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.048 [2024-07-14 21:21:26.019398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.048 [2024-07-14 21:21:26.019441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.048 [2024-07-14 21:21:26.019485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.048 [2024-07-14 21:21:26.019529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.048 [2024-07-14 21:21:26.019579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.048 [2024-07-14 21:21:26.019623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.019666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.019709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.019768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.019832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.019877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.019920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.019964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.019987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.020008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.020051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.020095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.020139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.020182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.020225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.020267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.020311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.048 [2024-07-14 21:21:26.020354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.048 [2024-07-14 21:21:26.020421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.048 [2024-07-14 21:21:26.020465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.048 [2024-07-14 21:21:26.020508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.048 [2024-07-14 21:21:26.020551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:26.048 [2024-07-14 21:21:26.020595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.048 [2024-07-14 21:21:26.020637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.048 [2024-07-14 21:21:26.020680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.048 [2024-07-14 21:21:26.020703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.049 [2024-07-14 21:21:26.020723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.020782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.020805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.020828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.020856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.020880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.020900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.020923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.020943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.020966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.020986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.021039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.021082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.021125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.021168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.021210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.021253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.021295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.021338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.021380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.021424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.021466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.049 [2024-07-14 21:21:26.021508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.049 [2024-07-14 21:21:26.021561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.049 [2024-07-14 21:21:26.021605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.049 [2024-07-14 21:21:26.021648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.049 [2024-07-14 21:21:26.021691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.049 [2024-07-14 21:21:26.021734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.049 [2024-07-14 21:21:26.021794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.049 [2024-07-14 21:21:26.021838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.021881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.021925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.021947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.021968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:26.049 [2024-07-14 21:21:26.021990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.022011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.022033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.022054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.022076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.022096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.022127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.022150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.022173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.022194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.022217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.022237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.022259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.022280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.022302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.049 [2024-07-14 21:21:26.022322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.049 [2024-07-14 21:21:26.022344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.022365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.022387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.022408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.022430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.022451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.022473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.022493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.022516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.022536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.022559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.022579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.022602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.022622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.022644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.022664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.022696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.022718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.022740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.022775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.022800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.022821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.022844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.022865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.022887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:82 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.022908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.022930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.022951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.022973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.022993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.023036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.023085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.023129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.023171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.023215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.023265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.023310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:26.050 [2024-07-14 21:21:26.023352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.023395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.023438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.023480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.023523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.023566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.050 [2024-07-14 21:21:26.023609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.023669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.023712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.023769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.023819] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.023872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.023916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.023959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.023981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.024001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.024023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.024044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.024066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.024086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.024108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.050 [2024-07-14 21:21:26.024129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.024150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ba00 is same with the state(5) to be set 00:22:26.050 [2024-07-14 21:21:26.024175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.050 [2024-07-14 21:21:26.024194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.050 [2024-07-14 21:21:26.024212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:8 PRP1 0x0 PRP2 0x0 00:22:26.050 [2024-07-14 21:21:26.024232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.050 [2024-07-14 21:21:26.024253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.050 [2024-07-14 21:21:26.024269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:26.050 [2024-07-14 21:21:26.024285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:840 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.024305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.024323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.024339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.024355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:848 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.024374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.024406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.024430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.024447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:856 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.024467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.024489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.024505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.024521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.024540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.024559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.024574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.024590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1256 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.024609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.024628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.024644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.024659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1264 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.024678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.024697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.024712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.024728] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1272 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.024747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.024780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.024797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.024813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.024833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.024852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.024867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.024883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1288 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.024902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.024921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.024936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.024951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1296 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.024970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.024997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.025014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.025029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1304 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.025049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.025070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.025086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.025101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.025120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.025139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.025153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.025169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:1320 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.025188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.025207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.025222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.025238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1328 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.025257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.025275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.025290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.025306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1336 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.025324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.025343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.025358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.025373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.025392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.025411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.025426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.025446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1352 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.025465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.025485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.051 [2024-07-14 21:21:26.025500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.051 [2024-07-14 21:21:26.025516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1360 len:8 PRP1 0x0 PRP2 0x0 00:22:26.051 [2024-07-14 21:21:26.025542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.025814] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002ba00 was disconnected and freed. reset controller. 
00:22:26.051 [2024-07-14 21:21:26.025845] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:26.051 [2024-07-14 21:21:26.025926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.051 [2024-07-14 21:21:26.025957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.025984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.051 [2024-07-14 21:21:26.026006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.026027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.051 [2024-07-14 21:21:26.026046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.026066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.051 [2024-07-14 21:21:26.026086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:26.026106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:26.051 [2024-07-14 21:21:26.026179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:22:26.051 [2024-07-14 21:21:26.030256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:26.051 [2024-07-14 21:21:26.084701] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:26.051 [2024-07-14 21:21:30.588242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.051 [2024-07-14 21:21:30.588353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:30.588408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.051 [2024-07-14 21:21:30.588436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:30.588460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.051 [2024-07-14 21:21:30.588481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:30.588503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.051 [2024-07-14 21:21:30.588524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:30.588547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.051 [2024-07-14 21:21:30.588567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:30.588589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.051 [2024-07-14 21:21:30.588610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.051 [2024-07-14 21:21:30.588655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.051 [2024-07-14 21:21:30.588677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.588700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.588739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.588780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.588803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.588826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.588846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.588869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.588896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.588919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.588938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.588960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.588980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.589022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.589064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.589106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.589149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.589192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.589246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.589291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:55 nsid:1 lba:3624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.589336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.589379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.589421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.589463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.589506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.589548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.589592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.589635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.589678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.589720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:26.052 [2024-07-14 21:21:30.589779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.589866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.589913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.589956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.589978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.589999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.590022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.590043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.590066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.590086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.590109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.590129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.590152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.590172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.590194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.590214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.590237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.590257] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.590280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.590299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.590321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.590342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.590364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.590384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.590415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.590436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.590459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.590480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.590502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.590522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.590544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.052 [2024-07-14 21:21:30.590565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.590587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.590608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.052 [2024-07-14 21:21:30.590630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.052 [2024-07-14 21:21:30.590651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.590674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.590694] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.590716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.590737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.590775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.590798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.590821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.590842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.590864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.590885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.590908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.590928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.590951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.590980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.591025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.591067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.591110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.591153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.591196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.591239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.591284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.591327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.591370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.591413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.591456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.591500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.591551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.591595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 
21:21:30.591617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.591653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.591697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.591739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.591800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.591843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.591886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.591928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.591971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.591993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.053 [2024-07-14 21:21:30.592013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.592035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.592055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.592079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.592100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.592132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.592153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.592175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.592196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.592218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.592239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.592261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.592282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.592304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.053 [2024-07-14 21:21:30.592324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.053 [2024-07-14 21:21:30.592348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.054 [2024-07-14 21:21:30.592368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.592403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.054 [2024-07-14 21:21:30.592425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.592447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.054 [2024-07-14 21:21:30.592468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.592490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.054 [2024-07-14 21:21:30.592510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.592533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:11 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.054 [2024-07-14 21:21:30.592553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.592575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.054 [2024-07-14 21:21:30.592596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.592619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.054 [2024-07-14 21:21:30.592639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.592661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.054 [2024-07-14 21:21:30.592690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.592713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.054 [2024-07-14 21:21:30.592734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.592770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-07-14 21:21:30.592794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.592832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-07-14 21:21:30.592854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.592877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-07-14 21:21:30.592897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.592920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-07-14 21:21:30.592940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.592962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-07-14 21:21:30.592983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:26.054 [2024-07-14 21:21:30.593026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-07-14 21:21:30.593068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-07-14 21:21:30.593111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-07-14 21:21:30.593153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-07-14 21:21:30.593195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-07-14 21:21:30.593238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-07-14 21:21:30.593290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-07-14 21:21:30.593332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-07-14 21:21:30.593389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.054 [2024-07-14 21:21:30.593430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(5) to be set 00:22:26.054 [2024-07-14 21:21:30.593475] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.054 [2024-07-14 21:21:30.593493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.054 [2024-07-14 21:21:30.593511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:8 PRP1 0x0 PRP2 0x0 00:22:26.054 [2024-07-14 21:21:30.593534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.054 [2024-07-14 21:21:30.593571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.054 [2024-07-14 21:21:30.593587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4488 len:8 PRP1 0x0 PRP2 0x0 00:22:26.054 [2024-07-14 21:21:30.593606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.054 [2024-07-14 21:21:30.593640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.054 [2024-07-14 21:21:30.593656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4496 len:8 PRP1 0x0 PRP2 0x0 00:22:26.054 [2024-07-14 21:21:30.593675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.054 [2024-07-14 21:21:30.593717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.054 [2024-07-14 21:21:30.593733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4504 len:8 PRP1 0x0 PRP2 0x0 00:22:26.054 [2024-07-14 21:21:30.593766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.054 [2024-07-14 21:21:30.593805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.054 [2024-07-14 21:21:30.593821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:8 PRP1 0x0 PRP2 0x0 00:22:26.054 [2024-07-14 21:21:30.593840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.054 [2024-07-14 21:21:30.593885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.054 [2024-07-14 21:21:30.593901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4520 len:8 PRP1 0x0 PRP2 0x0 00:22:26.054 [2024-07-14 21:21:30.593920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.593939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:26.054 [2024-07-14 21:21:30.593954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.054 [2024-07-14 21:21:30.593969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4528 len:8 PRP1 0x0 PRP2 0x0 00:22:26.054 [2024-07-14 21:21:30.593988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.594007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.054 [2024-07-14 21:21:30.594022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.054 [2024-07-14 21:21:30.594037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4536 len:8 PRP1 0x0 PRP2 0x0 00:22:26.054 [2024-07-14 21:21:30.594056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.594075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.054 [2024-07-14 21:21:30.594090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.054 [2024-07-14 21:21:30.594105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:8 PRP1 0x0 PRP2 0x0 00:22:26.054 [2024-07-14 21:21:30.594127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.594147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.054 [2024-07-14 21:21:30.594162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.054 [2024-07-14 21:21:30.594178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4552 len:8 PRP1 0x0 PRP2 0x0 00:22:26.054 [2024-07-14 21:21:30.594197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.594216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.054 [2024-07-14 21:21:30.594231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.054 [2024-07-14 21:21:30.594246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4560 len:8 PRP1 0x0 PRP2 0x0 00:22:26.054 [2024-07-14 21:21:30.594265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.054 [2024-07-14 21:21:30.594284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.054 [2024-07-14 21:21:30.594298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.055 [2024-07-14 21:21:30.594314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4568 len:8 PRP1 0x0 PRP2 0x0 00:22:26.055 [2024-07-14 21:21:30.594333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.055 [2024-07-14 21:21:30.594367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.055 [2024-07-14 21:21:30.594383] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.055 [2024-07-14 21:21:30.594399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:8 PRP1 0x0 PRP2 0x0 00:22:26.055 [2024-07-14 21:21:30.594427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.055 [2024-07-14 21:21:30.594447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.055 [2024-07-14 21:21:30.594462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.055 [2024-07-14 21:21:30.594477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4584 len:8 PRP1 0x0 PRP2 0x0 00:22:26.055 [2024-07-14 21:21:30.594496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.055 [2024-07-14 21:21:30.594515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.055 [2024-07-14 21:21:30.594530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.055 [2024-07-14 21:21:30.594546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4592 len:8 PRP1 0x0 PRP2 0x0 00:22:26.055 [2024-07-14 21:21:30.594565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.055 [2024-07-14 21:21:30.594584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.055 [2024-07-14 21:21:30.594598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.055 [2024-07-14 21:21:30.594614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4600 len:8 PRP1 0x0 PRP2 0x0 00:22:26.055 [2024-07-14 21:21:30.594633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.055 [2024-07-14 21:21:30.594652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.055 [2024-07-14 21:21:30.594666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.055 [2024-07-14 21:21:30.594682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:8 PRP1 0x0 PRP2 0x0 00:22:26.055 [2024-07-14 21:21:30.594703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.055 [2024-07-14 21:21:30.594975] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002c180 was disconnected and freed. reset controller. 
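The wall of ABORTED - SQ DELETION notices above is the expected side effect of a forced path switch rather than an error: when bdev_nvme tears down the active I/O queue pair, every command still queued on it is completed manually with an abort status, the qpair is freed, and the controller is reset onto the next configured path. A quick way to sanity-check a run like this from the saved bdevperf output (the test writes it to /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt, as the later trace shows) is to count the reset messages; a minimal sketch, assuming the file has not yet been cleaned up:

# one 'Resetting controller successful' line is expected per forced failover
grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt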
00:22:26.055 [2024-07-14 21:21:30.595006] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:26.055 [2024-07-14 21:21:30.595078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.055 [2024-07-14 21:21:30.595108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.055 [2024-07-14 21:21:30.595131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.055 [2024-07-14 21:21:30.595151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.055 [2024-07-14 21:21:30.595171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.055 [2024-07-14 21:21:30.595191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.055 [2024-07-14 21:21:30.595211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.055 [2024-07-14 21:21:30.595231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.055 [2024-07-14 21:21:30.595250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:26.055 [2024-07-14 21:21:30.595320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:22:26.055 [2024-07-14 21:21:30.599428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:26.055 [2024-07-14 21:21:30.648210] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:26.055 00:22:26.055 Latency(us) 00:22:26.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.055 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:26.055 Verification LBA range: start 0x0 length 0x4000 00:22:26.055 NVMe0n1 : 15.01 6776.78 26.47 244.40 0.00 18189.86 867.61 21448.15 00:22:26.055 =================================================================================================================== 00:22:26.055 Total : 6776.78 26.47 244.40 0.00 18189.86 867.61 21448.15 00:22:26.055 Received shutdown signal, test time was about 15.000000 seconds 00:22:26.055 00:22:26.055 Latency(us) 00:22:26.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.055 =================================================================================================================== 00:22:26.055 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:26.055 21:21:37 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:26.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
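The 15-second verify run above ends with three successful controller resets, and failover.sh gates on that count before moving on to the second bdevperf instance it is now waiting for (started with -z so it idles until driven over /var/tmp/bdevperf.sock). Condensed from the failover.sh lines visible in the trace, the gate is just:

# path shortened here; the trace reads the full .../test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' try.txt)
if (( count != 3 )); then
    exit 1   # exactly one successful reset is expected per failover the script forced
fi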
00:22:26.055 21:21:37 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:26.055 21:21:37 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:26.055 21:21:37 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=81851 00:22:26.055 21:21:37 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 81851 /var/tmp/bdevperf.sock 00:22:26.055 21:21:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 81851 ']' 00:22:26.055 21:21:37 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:26.055 21:21:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.055 21:21:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:26.055 21:21:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.055 21:21:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:26.055 21:21:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:26.989 21:21:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:26.989 21:21:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:26.989 21:21:38 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:27.247 [2024-07-14 21:21:38.709596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:27.247 21:21:38 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:27.505 [2024-07-14 21:21:38.990037] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:27.505 21:21:39 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:27.763 NVMe0n1 00:22:28.021 21:21:39 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:28.279 00:22:28.279 21:21:39 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:28.538 00:22:28.538 21:21:39 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:28.538 21:21:39 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:28.797 21:21:40 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:29.055 21:21:40 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:32.338 21:21:43 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:32.338 21:21:43 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:32.338 21:21:43 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=81929 00:22:32.338 21:21:43 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:32.338 21:21:43 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 81929 00:22:33.711 0 00:22:33.711 21:21:44 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:33.711 [2024-07-14 21:21:37.539845] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:33.711 [2024-07-14 21:21:37.540081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81851 ] 00:22:33.711 [2024-07-14 21:21:37.705330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.711 [2024-07-14 21:21:37.903466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.711 [2024-07-14 21:21:38.095109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:33.711 [2024-07-14 21:21:40.388098] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:33.711 [2024-07-14 21:21:40.388263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.711 [2024-07-14 21:21:40.388304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.711 [2024-07-14 21:21:40.388334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.711 [2024-07-14 21:21:40.388363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.711 [2024-07-14 21:21:40.388384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.711 [2024-07-14 21:21:40.388421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.711 [2024-07-14 21:21:40.388442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.711 [2024-07-14 21:21:40.388466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.711 [2024-07-14 21:21:40.388486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
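Inside the captured try.txt output, the 'in failed state' error is again part of the intended flow: the script detached the active path (10.0.0.2:4420) out from under bdevperf, so the controller is marked failed, its queued I/O is aborted, and bdev_nvme fails over to the next registered trid (4421) and resets. The path set that makes this possible was assembled by the rpc.py calls traced above; condensed (the trace issues the three attach calls individually rather than in a loop):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# expose the same subsystem on two extra target ports
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# register all three paths with the bdevperf instance
for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# drop the active path; bdev_nvme should fail over to 4421
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1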
00:22:33.711 [2024-07-14 21:21:40.388568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:33.711 [2024-07-14 21:21:40.388616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:22:33.711 [2024-07-14 21:21:40.400671] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:33.711 Running I/O for 1 seconds... 00:22:33.711 00:22:33.711 Latency(us) 00:22:33.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.711 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:33.711 Verification LBA range: start 0x0 length 0x4000 00:22:33.711 NVMe0n1 : 1.01 5194.02 20.29 0.00 0.00 24532.72 3276.80 21924.77 00:22:33.711 =================================================================================================================== 00:22:33.711 Total : 5194.02 20.29 0.00 0.00 24532.72 3276.80 21924.77 00:22:33.711 21:21:44 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:33.711 21:21:44 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:33.711 21:21:45 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:33.970 21:21:45 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:33.970 21:21:45 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:34.228 21:21:45 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:34.485 21:21:45 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:37.779 21:21:48 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:37.779 21:21:48 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:37.779 21:21:49 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 81851 00:22:37.779 21:21:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 81851 ']' 00:22:37.779 21:21:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 81851 00:22:37.779 21:21:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:37.779 21:21:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.779 21:21:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81851 00:22:37.779 killing process with pid 81851 00:22:37.779 21:21:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:37.779 21:21:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:37.779 21:21:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81851' 00:22:37.779 21:21:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 81851 00:22:37.779 21:21:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 81851 00:22:39.155 
21:21:50 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:39.155 rmmod nvme_tcp 00:22:39.155 rmmod nvme_fabrics 00:22:39.155 rmmod nvme_keyring 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 81585 ']' 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 81585 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 81585 ']' 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 81585 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81585 00:22:39.155 killing process with pid 81585 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81585' 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 81585 00:22:39.155 21:21:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 81585 00:22:41.058 21:21:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:41.058 21:21:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:41.058 21:21:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:41.058 21:21:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:41.058 21:21:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:41.058 21:21:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.058 21:21:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.058 21:21:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.058 21:21:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 
addr flush nvmf_init_if 00:22:41.058 00:22:41.058 real 0m36.106s 00:22:41.058 user 2m18.002s 00:22:41.058 sys 0m5.416s 00:22:41.058 21:21:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:41.058 21:21:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:41.058 ************************************ 00:22:41.058 END TEST nvmf_failover 00:22:41.058 ************************************ 00:22:41.058 21:21:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:41.058 21:21:52 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:41.058 21:21:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:41.058 21:21:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:41.058 21:21:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:41.058 ************************************ 00:22:41.058 START TEST nvmf_host_discovery 00:22:41.058 ************************************ 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:41.058 * Looking for test storage... 00:22:41.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
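nvmf_failover is finished at this point (36 s wall clock) and the harness rolls straight into nvmf_host_discovery, sourcing test/nvmf/common.sh to pick up the TCP defaults, a freshly generated host NQN/ID (nvme gen-hostnqn) and the virtual (veth) network type. The START/END banners and the real/user/sys block come from the run_test wrapper; a simplified sketch of that pattern (the real helper in autotest_common.sh adds banner formatting, xtrace control and return-code bookkeeping):

run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"            # source of the per-test real/user/sys lines seen above
    echo "END TEST $name"
}
run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp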
00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:41.058 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:41.059 
21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:41.059 Cannot find device "nvmf_tgt_br" 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:41.059 Cannot find device "nvmf_tgt_br2" 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:41.059 Cannot find device "nvmf_tgt_br" 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:41.059 Cannot find device "nvmf_tgt_br2" 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:41.059 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:41.059 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set 
nvmf_tgt_br up 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:41.059 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:41.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:22:41.318 00:22:41.318 --- 10.0.0.2 ping statistics --- 00:22:41.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.318 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:41.318 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:41.318 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:22:41.318 00:22:41.318 --- 10.0.0.3 ping statistics --- 00:22:41.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.318 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:41.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:41.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:22:41.318 00:22:41.318 --- 10.0.0.1 ping statistics --- 00:22:41.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.318 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=82215 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 82215 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 82215 ']' 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.318 21:21:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.318 [2024-07-14 21:21:52.841057] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:41.318 [2024-07-14 21:21:52.841246] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.577 [2024-07-14 21:21:53.023744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.841 [2024-07-14 21:21:53.245409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
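The block above is nvmf_veth_init building the topology that NET_TYPE=virt implies: the target interfaces live in the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1, the veth legs are joined through the nvmf_br bridge, and the three pings prove reachability before nvmf_tgt (pid 82215) is launched inside the namespace on core mask 0x2. Condensed from the commands in the trace, leaving out the second target interface and several 'ip link set ... up' steps:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator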
00:22:41.841 [2024-07-14 21:21:53.245511] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.841 [2024-07-14 21:21:53.245527] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.841 [2024-07-14 21:21:53.245539] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.841 [2024-07-14 21:21:53.245550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.841 [2024-07-14 21:21:53.245603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.116 [2024-07-14 21:21:53.466986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.374 [2024-07-14 21:21:53.862093] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.374 [2024-07-14 21:21:53.870406] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.374 null0 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.374 null1 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=82247 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 82247 /tmp/host.sock 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 82247 ']' 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:42.374 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:42.374 21:21:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.632 [2024-07-14 21:21:54.011461] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:42.633 [2024-07-14 21:21:54.011651] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82247 ] 00:22:42.633 [2024-07-14 21:21:54.179991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.891 [2024-07-14 21:21:54.391142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.149 [2024-07-14 21:21:54.591475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.715 21:21:54 
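From here on the test drives two SPDK applications side by side: the target it just configured (discovery listener on 10.0.0.2:8009, null bdevs) and a second nvmf_tgt instance acting as the NVMe-oF host, reached through its own RPC socket /tmp/host.sock. A rough sketch of that host-side startup and the initial discovery request, assuming SPDK's scripts/rpc.py in place of the suite's rpc_cmd wrapper and reusing the arguments shown in the trace:

  # Sketch only: the host-side application and the first discovery request.
  HOST_SOCK=/tmp/host.sock

  # second SPDK app, core mask 0x1, on a private RPC socket so it does not
  # clash with the target's /var/tmp/spdk.sock
  ./build/bin/nvmf_tgt -m 0x1 -r "$HOST_SOCK" &
  hostpid=$!

  # wait for the host app's RPC socket, then enable bdev_nvme debug logging
  until ./scripts/rpc.py -s "$HOST_SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  ./scripts/rpc.py -s "$HOST_SOCK" log_set_flag bdev_nvme

  # attach to the discovery service the target exposes on 10.0.0.2:8009
  ./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test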
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.715 21:21:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:43.715 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.974 [2024-07-14 21:21:55.323560] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:43.974 
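The repeated rpc_cmd/jq/sort/xargs pipelines above are the small query helpers discovery.sh leans on; until the data listener is announced they all come back empty. A reconstruction of those helpers based purely on the pipelines visible in the trace, again substituting scripts/rpc.py for the rpc_cmd wrapper (the authoritative definitions live in test/nvmf/host/discovery.sh):

  # Reconstructed from the pipelines in the trace; names match discovery.sh.
  HOST_SOCK=/tmp/host.sock

  get_subsystem_names() {
      # controller names the host has attached (e.g. "nvme0"), space separated
      ./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers \
          | jq -r '.[].name' | sort | xargs
  }

  get_bdev_list() {
      # namespaces that surfaced as bdevs on the host (e.g. "nvme0n1 nvme0n2")
      ./scripts/rpc.py -s "$HOST_SOCK" bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs
  }

  get_subsystem_paths() {
      # listener ports controller $1 is currently connected through
      ./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

  # example: both are still empty until the 4420 listener is announced
  [[ "$(get_subsystem_names)" == "" ]] && [[ "$(get_bdev_list)" == "" ]]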
21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:43.974 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.232 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:22:44.232 21:21:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:44.490 [2024-07-14 21:21:55.987696] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:44.490 [2024-07-14 21:21:55.987736] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:44.490 [2024-07-14 21:21:55.987774] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:44.490 [2024-07-14 21:21:55.993762] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:44.749 [2024-07-14 21:21:56.059282] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:44.749 [2024-07-14 21:21:56.059319] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:45.008 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:45.008 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:45.008 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:45.267 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.268 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.527 [2024-07-14 21:21:56.907241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:45.527 [2024-07-14 21:21:56.908377] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:45.527 [2024-07-14 21:21:56.908453] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.527 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:45.528 [2024-07-14 21:21:56.914472] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:45.528 21:21:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:45.528 [2024-07-14 21:21:56.980209] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:45.528 [2024-07-14 21:21:56.980264] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:45.528 [2024-07-14 21:21:56.980277] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:45.528 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.528 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:45.528 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:45.528 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:45.528 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:45.528 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:45.528 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:45.528 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:45.528 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:45.528 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:45.528 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:45.528 
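Each of these assertions runs through a generic polling helper (the local max=10 / (( max-- )) / eval / sleep 1 lines in the trace), so the test tolerates the short gap between the AER arriving and the new path or bdev becoming visible; the notification checks count events reported by notify_get_notifications since the last consumed id. A simplified sketch of both patterns, with bodies reduced to what the trace shows rather than the exact autotest_common.sh/discovery.sh implementations:

  # Simplified polling helper in the spirit of waitforcondition.
  waitforcondition() {
      local cond=$1 max=10
      while (( max-- )); do
          eval "$cond" && return 0   # condition already true
          sleep 1                    # otherwise retry, up to ~10s total
      done
      return 1
  }

  # Count async notifications since $notify_id, as the trace does with
  # 'notify_get_notifications -i N | jq ". | length"'.
  notify_id=0   # advanced as notifications are consumed
  get_notification_count() {
      notification_count=$(./scripts/rpc.py -s /tmp/host.sock \
          notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

  # e.g. after the second listener (4421) is announced:
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'
  waitforcondition 'get_notification_count && ((notification_count == 0))'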
21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.528 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.528 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:45.528 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:45.528 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.787 [2024-07-14 21:21:57.148618] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:45.787 [2024-07-14 21:21:57.148668] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:45.787 [2024-07-14 21:21:57.154642] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:45.787 [2024-07-14 21:21:57.154696] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:45.787 [2024-07-14 21:21:57.154879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.787 [2024-07-14 21:21:57.154923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.787 [2024-07-14 21:21:57.154951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.787 [2024-07-14 21:21:57.154966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.787 [2024-07-14 21:21:57.154980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.787 [2024-07-14 21:21:57.154994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.787 [2024-07-14 21:21:57.155008] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.787 [2024-07-14 21:21:57.155022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.787 [2024-07-14 21:21:57.155036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:45.787 21:21:57 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:45.787 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:45.788 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:46.047 21:21:57 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.047 21:21:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.424 [2024-07-14 21:21:58.579114] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:47.424 [2024-07-14 21:21:58.579152] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:47.424 [2024-07-14 21:21:58.579184] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:47.424 [2024-07-14 21:21:58.585188] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:47.424 [2024-07-14 21:21:58.655995] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:47.424 [2024-07-14 21:21:58.656051] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:47.424 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.424 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:47.424 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:47.424 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:47.424 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:47.424 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.424 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:47.424 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:47.425 request: 00:22:47.425 { 00:22:47.425 "name": "nvme", 00:22:47.425 "trtype": "tcp", 00:22:47.425 "traddr": "10.0.0.2", 00:22:47.425 "adrfam": "ipv4", 00:22:47.425 "trsvcid": "8009", 00:22:47.425 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:47.425 "wait_for_attach": true, 00:22:47.425 "method": "bdev_nvme_start_discovery", 00:22:47.425 "req_id": 1 00:22:47.425 } 00:22:47.425 Got JSON-RPC error response 00:22:47.425 response: 00:22:47.425 { 00:22:47.425 "code": -17, 00:22:47.425 "message": "File exists" 00:22:47.425 } 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.425 request: 00:22:47.425 { 00:22:47.425 "name": "nvme_second", 00:22:47.425 "trtype": "tcp", 00:22:47.425 "traddr": "10.0.0.2", 00:22:47.425 "adrfam": "ipv4", 00:22:47.425 "trsvcid": "8009", 00:22:47.425 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:47.425 "wait_for_attach": true, 00:22:47.425 "method": "bdev_nvme_start_discovery", 00:22:47.425 "req_id": 1 00:22:47.425 } 00:22:47.425 Got JSON-RPC error response 00:22:47.425 response: 00:22:47.425 { 00:22:47.425 "code": -17, 00:22:47.425 "message": "File exists" 00:22:47.425 } 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.425 21:21:58 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.425 21:21:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.802 [2024-07-14 21:21:59.952824] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.802 [2024-07-14 21:21:59.952895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bc80 with addr=10.0.0.2, port=8010 00:22:48.802 [2024-07-14 21:21:59.952964] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:48.802 [2024-07-14 21:21:59.952983] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:48.802 [2024-07-14 21:21:59.952999] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:49.739 [2024-07-14 21:22:00.952904] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.739 [2024-07-14 21:22:00.953244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bf00 with addr=10.0.0.2, port=8010 00:22:49.739 [2024-07-14 21:22:00.953476] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:49.739 [2024-07-14 21:22:00.953600] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:49.739 [2024-07-14 21:22:00.953655] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:50.675 [2024-07-14 21:22:01.952574] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:50.676 request: 00:22:50.676 { 00:22:50.676 "name": "nvme_second", 00:22:50.676 "trtype": "tcp", 00:22:50.676 "traddr": "10.0.0.2", 00:22:50.676 "adrfam": "ipv4", 00:22:50.676 "trsvcid": "8010", 00:22:50.676 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:50.676 "wait_for_attach": false, 00:22:50.676 "attach_timeout_ms": 3000, 00:22:50.676 "method": "bdev_nvme_start_discovery", 00:22:50.676 "req_id": 1 00:22:50.676 } 00:22:50.676 Got JSON-RPC error response 00:22:50.676 response: 00:22:50.676 { 00:22:50.676 "code": -110, 
00:22:50.676 "message": "Connection timed out" 00:22:50.676 } 00:22:50.676 21:22:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:50.676 21:22:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:50.676 21:22:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:50.676 21:22:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:50.676 21:22:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:50.676 21:22:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:50.676 21:22:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:50.676 21:22:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:50.676 21:22:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.676 21:22:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:50.676 21:22:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.676 21:22:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:50.676 21:22:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 82247 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.676 rmmod nvme_tcp 00:22:50.676 rmmod nvme_fabrics 00:22:50.676 rmmod nvme_keyring 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 82215 ']' 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 82215 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 82215 ']' 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 82215 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82215 00:22:50.676 killing process with pid 82215 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82215' 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 82215 00:22:50.676 21:22:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 82215 00:22:51.629 21:22:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:51.629 21:22:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:51.629 21:22:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:51.629 21:22:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:51.629 21:22:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:51.629 21:22:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.629 21:22:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.629 21:22:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:51.888 00:22:51.888 real 0m10.951s 00:22:51.888 user 0m21.139s 00:22:51.888 sys 0m2.015s 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.888 ************************************ 00:22:51.888 END TEST nvmf_host_discovery 00:22:51.888 ************************************ 00:22:51.888 21:22:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:51.888 21:22:03 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:51.888 21:22:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:51.888 21:22:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:51.888 21:22:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:51.888 ************************************ 00:22:51.888 START TEST nvmf_host_multipath_status 00:22:51.888 ************************************ 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:51.888 * Looking for test storage... 
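For readers skimming the xtrace output above, the nvmf_host_discovery run boils down to a handful of bdev_nvme_start_discovery RPC calls against the host application on /tmp/host.sock. The following is a minimal sketch of that sequence, reusing the socket path, addresses and flags exactly as they appear in the log; it assumes the host app and the discovery listener on 10.0.0.2:8009 are already running, and the "expected" comments mirror the failures the test's NOT wrapper asserts.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Attach a discovery controller named "nvme" and wait (-w) for the initial attach.
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Repeating the call against the same discovery service (here also with the same
# bdev name) is rejected with JSON-RPC error -17, "File exists".
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "expected: -17 File exists"

# Pointing a new controller at a port with no listener (8010) and capping the
# attach with -T 3000 ms ends in error -110, "Connection timed out".
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 \
    -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || echo "expected: -110 timeout"

Between these calls the test cross-checks state with bdev_nvme_get_discovery_info and bdev_get_bdevs (the jq/sort/xargs pipelines in the log), which is where the nvme0n1/nvme0n2 comparisons come from.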
00:22:51.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:51.888 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:51.889 Cannot find device "nvmf_tgt_br" 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:22:51.889 Cannot find device "nvmf_tgt_br2" 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:51.889 Cannot find device "nvmf_tgt_br" 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:51.889 Cannot find device "nvmf_tgt_br2" 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:22:51.889 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:52.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:52.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:52.146 21:22:03 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:52.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:22:52.146 00:22:52.146 --- 10.0.0.2 ping statistics --- 00:22:52.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.146 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:52.146 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:52.146 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:22:52.146 00:22:52.146 --- 10.0.0.3 ping statistics --- 00:22:52.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.146 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:52.146 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:52.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:52.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:22:52.405 00:22:52.405 --- 10.0.0.1 ping statistics --- 00:22:52.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.405 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:52.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=82710 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 82710 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 82710 ']' 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.405 21:22:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:52.405 [2024-07-14 21:22:03.821465] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
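The nvmf_veth_init block above is easier to follow with the xtrace prefixes stripped. Below is a condensed sketch of the same initiator/target topology, built only from commands that appear in the log (run as root; iproute2 and iptables assumed). The nvmf_tgt application is then launched inside nvmf_tgt_ns_spdk, so the initiator on nvmf_init_if reaches it through the nvmf_br bridge as 10.0.0.2 and 10.0.0.3.

# Namespace and three veth pairs: one initiator-side, two target-side.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2, 10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator outside the namespace, both target IPs inside it.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers together and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Reachability checks, matching the ping statistics recorded in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1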
00:22:52.405 [2024-07-14 21:22:03.821848] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.674 [2024-07-14 21:22:03.978464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:52.674 [2024-07-14 21:22:04.131374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.674 [2024-07-14 21:22:04.131686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.674 [2024-07-14 21:22:04.131865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.674 [2024-07-14 21:22:04.131997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.674 [2024-07-14 21:22:04.132045] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.674 [2024-07-14 21:22:04.132335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.674 [2024-07-14 21:22:04.132351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.931 [2024-07-14 21:22:04.287732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:53.496 21:22:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.496 21:22:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:53.496 21:22:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:53.496 21:22:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:53.496 21:22:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:53.496 21:22:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.496 21:22:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=82710 00:22:53.496 21:22:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:53.496 [2024-07-14 21:22:05.026466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.754 21:22:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:53.754 Malloc0 00:22:54.012 21:22:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:54.012 21:22:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:54.270 21:22:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:54.528 [2024-07-14 21:22:05.956677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.528 21:22:05 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:54.786 [2024-07-14 21:22:06.176937] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:54.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:54.786 21:22:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=82760 00:22:54.786 21:22:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:54.786 21:22:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:54.786 21:22:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 82760 /var/tmp/bdevperf.sock 00:22:54.786 21:22:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 82760 ']' 00:22:54.786 21:22:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.786 21:22:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.786 21:22:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.786 21:22:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.786 21:22:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:55.721 21:22:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.721 21:22:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:55.721 21:22:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:55.979 21:22:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:56.547 Nvme0n1 00:22:56.547 21:22:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:56.806 Nvme0n1 00:22:56.806 21:22:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:56.806 21:22:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:58.709 21:22:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:58.709 21:22:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:58.967 21:22:10 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:59.225 21:22:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:00.601 21:22:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:00.601 21:22:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:00.601 21:22:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.601 21:22:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:00.601 21:22:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.601 21:22:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:00.601 21:22:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.601 21:22:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:00.858 21:22:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:00.858 21:22:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:00.858 21:22:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:00.858 21:22:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.117 21:22:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.117 21:22:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:01.117 21:22:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.117 21:22:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:01.375 21:22:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.375 21:22:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:01.375 21:22:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.375 21:22:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:01.633 21:22:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.634 21:22:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:23:01.634 21:22:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.634 21:22:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:01.891 21:22:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.891 21:22:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:01.891 21:22:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:02.148 21:22:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:02.406 21:22:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:03.779 21:22:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:03.779 21:22:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:03.779 21:22:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.779 21:22:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:03.779 21:22:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:03.779 21:22:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:03.779 21:22:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.779 21:22:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:04.037 21:22:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.037 21:22:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:04.037 21:22:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:04.037 21:22:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.295 21:22:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.295 21:22:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:04.295 21:22:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.295 21:22:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:04.553 21:22:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.553 21:22:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:04.553 21:22:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:04.553 21:22:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.814 21:22:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.814 21:22:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:04.814 21:22:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:04.814 21:22:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.080 21:22:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.080 21:22:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:05.080 21:22:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:05.338 21:22:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:05.596 21:22:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:06.530 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:06.530 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:06.530 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.530 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:07.096 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.096 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:07.096 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:07.096 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.096 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:23:07.096 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:07.096 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.096 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:07.355 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.355 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:07.355 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.355 21:22:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:07.613 21:22:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.613 21:22:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:07.613 21:22:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.613 21:22:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:08.253 21:22:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.253 21:22:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:08.253 21:22:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.253 21:22:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:08.253 21:22:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.253 21:22:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:08.253 21:22:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:08.511 21:22:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:08.769 21:22:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:09.719 21:22:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:09.719 21:22:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:09.719 21:22:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.719 21:22:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:09.977 21:22:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.977 21:22:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:09.977 21:22:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.977 21:22:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:10.235 21:22:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:10.235 21:22:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:10.235 21:22:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.235 21:22:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:10.492 21:22:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.492 21:22:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:10.492 21:22:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.492 21:22:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:10.750 21:22:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.750 21:22:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:10.750 21:22:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.750 21:22:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:11.009 21:22:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.009 21:22:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:11.009 21:22:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.009 21:22:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:11.267 21:22:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:11.267 21:22:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:23:11.267 21:22:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:11.526 21:22:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:11.795 21:22:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:13.171 21:22:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:13.171 21:22:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:13.171 21:22:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.171 21:22:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:13.171 21:22:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:13.171 21:22:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:13.171 21:22:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.171 21:22:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:13.429 21:22:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:13.429 21:22:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:13.429 21:22:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:13.429 21:22:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.688 21:22:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.688 21:22:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:13.688 21:22:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.688 21:22:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:13.946 21:22:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.946 21:22:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:13.946 21:22:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.946 21:22:25 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:14.204 21:22:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:14.204 21:22:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:14.204 21:22:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.205 21:22:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:14.463 21:22:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:14.463 21:22:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:14.463 21:22:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:14.721 21:22:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:14.980 21:22:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:16.355 21:22:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:16.355 21:22:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:16.355 21:22:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.355 21:22:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:16.355 21:22:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:16.355 21:22:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:16.355 21:22:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.355 21:22:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:16.614 21:22:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.614 21:22:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:16.614 21:22:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.614 21:22:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:16.872 21:22:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.872 21:22:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:16.872 21:22:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.872 21:22:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:17.131 21:22:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.131 21:22:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:17.389 21:22:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.389 21:22:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:17.389 21:22:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:17.389 21:22:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:17.389 21:22:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.389 21:22:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:17.648 21:22:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.648 21:22:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:18.214 21:22:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:18.214 21:22:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:18.214 21:22:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:18.779 21:22:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:19.713 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:19.713 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:19.713 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.713 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:19.971 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.971 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:19.971 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.971 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:20.229 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.229 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:20.229 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.229 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:20.486 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.486 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:20.486 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.486 21:22:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:20.745 21:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.745 21:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:20.745 21:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:20.745 21:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.003 21:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.003 21:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:21.003 21:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.003 21:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:21.261 21:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.261 21:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:21.261 21:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:21.520 21:22:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:21.778 21:22:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:22.710 21:22:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:22.710 21:22:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:22.710 21:22:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.710 21:22:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:22.968 21:22:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:22.968 21:22:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:22.968 21:22:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.968 21:22:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:23.533 21:22:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.533 21:22:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:23.533 21:22:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.533 21:22:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:23.533 21:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.533 21:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:23.533 21:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.533 21:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:23.791 21:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.791 21:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:23.791 21:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.791 21:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:24.049 21:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.049 21:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:24.049 21:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.049 21:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:24.616 21:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.616 21:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:24.616 21:22:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:24.616 21:22:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:24.874 21:22:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:26.247 21:22:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:26.247 21:22:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:26.247 21:22:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.247 21:22:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:26.247 21:22:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.247 21:22:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:26.247 21:22:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.247 21:22:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:26.506 21:22:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.506 21:22:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:26.506 21:22:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.506 21:22:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:26.764 21:22:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.764 21:22:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:26.764 21:22:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.764 21:22:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:23:27.021 21:22:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.021 21:22:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:27.021 21:22:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.021 21:22:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:27.278 21:22:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.278 21:22:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:27.278 21:22:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.278 21:22:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:27.535 21:22:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.535 21:22:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:27.535 21:22:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:27.793 21:22:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:28.051 21:22:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:29.425 21:22:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:29.425 21:22:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:29.425 21:22:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.425 21:22:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:29.425 21:22:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.425 21:22:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:29.425 21:22:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.425 21:22:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:29.685 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:29.685 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:23:29.685 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.685 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:29.954 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.954 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:29.954 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.954 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:30.229 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.229 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:30.229 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.229 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:30.487 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.487 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:30.487 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.487 21:22:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:30.744 21:22:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:30.744 21:22:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 82760 00:23:30.744 21:22:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 82760 ']' 00:23:30.744 21:22:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 82760 00:23:30.744 21:22:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:30.744 21:22:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:30.744 21:22:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82760 00:23:30.744 killing process with pid 82760 00:23:30.744 21:22:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:30.744 21:22:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:30.744 21:22:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82760' 00:23:30.744 21:22:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 82760 
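[editor's note] The trace above keeps repeating one three-step pattern: flip the ANA state of the two listeners (ports 4420/4421) on the target, sleep a second so the host can reread the ANA log pages, then read the io_path flags (current/connected/accessible) back through bdevperf's RPC socket and compare them with the expected values. Below is a condensed sketch of that pattern, reconstructed only from the commands visible in the trace; the helper names set_ana/port_flag are illustrative stand-ins for the set_ANA_state/port_status/check_status helpers in test/nvmf/host/multipath_status.sh, and it assumes it is run from the SPDK repo root with the target and bdevperf already up.

# Set the ANA state of both listeners of cnode1 (states used in the trace:
# optimized, non_optimized, inaccessible). Illustrative helper.
set_ana() {
    local state_4420=$1 state_4421=$2
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
}

# Read one flag (current/connected/accessible) of the io_path behind a port,
# via the bdevperf application's RPC socket. Illustrative helper.
port_flag() {
    local port=$1 flag=$2
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$flag"
}

# Example step, matching the @133/@135 sequence in the trace:
set_ana non_optimized inaccessible
sleep 1                                       # let the host pick up the ANA change
[[ $(port_flag 4420 current) == true ]]       # 4420 stays the active path
[[ $(port_flag 4421 accessible) == false ]]   # 4421 is now inaccessible
[end of editor's note]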
00:23:30.744 21:22:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 82760 00:23:31.677 Connection closed with partial response: 00:23:31.677 00:23:31.677 00:23:31.939 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 82760 00:23:31.939 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:31.939 [2024-07-14 21:22:06.307362] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:31.939 [2024-07-14 21:22:06.307589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82760 ] 00:23:31.939 [2024-07-14 21:22:06.482531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.939 [2024-07-14 21:22:06.688315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.939 [2024-07-14 21:22:06.881232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:31.939 Running I/O for 90 seconds... 00:23:31.939 [2024-07-14 21:22:23.015873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.015969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.016088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.016144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.016198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.016249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.016300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79464 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.016351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.016401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.016468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.016519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.016597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.016652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.016703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.016768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.016824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.016875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016905] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.939 [2024-07-14 21:22:23.016927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.016964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.939 [2024-07-14 21:22:23.016986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.017016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.939 [2024-07-14 21:22:23.017038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.017068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.939 [2024-07-14 21:22:23.017089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.017118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.939 [2024-07-14 21:22:23.017140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.017169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.939 [2024-07-14 21:22:23.017204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.017248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.939 [2024-07-14 21:22:23.017279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.017311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.939 [2024-07-14 21:22:23.017333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:31.939 [2024-07-14 21:22:23.017394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.939 [2024-07-14 21:22:23.017421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.017467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.017503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 
21:22:23.017564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.017600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.017628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.017648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.017676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.017697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.017725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.017746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.017774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.017811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.017840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.017861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.017910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.017934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.017964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.018006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.018037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.018059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.018101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.018127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 
sqhd:0015 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.018160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.018185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.018218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.018242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.018274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.018297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.018329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.018353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.018394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.940 [2024-07-14 21:22:23.018424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.018456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.940 [2024-07-14 21:22:23.018478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.018536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.940 [2024-07-14 21:22:23.018557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.018584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.940 [2024-07-14 21:22:23.018605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.018633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.940 [2024-07-14 21:22:23.018687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.018731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.940 [2024-07-14 21:22:23.018783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.018811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.940 [2024-07-14 21:22:23.018848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.018906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.940 [2024-07-14 21:22:23.018932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.018983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019421] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.940 [2024-07-14 21:22:23.019928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.019957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.940 [2024-07-14 21:22:23.019979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.020009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:31.940 [2024-07-14 21:22:23.020031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:31.940 [2024-07-14 21:22:23.020060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.940 [2024-07-14 21:22:23.020081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.020110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.020131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.020160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.020181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.020224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.020262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.020291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.020313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.020342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.020372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.020402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.020436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.020467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.020490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.020519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.020540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.020570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:59 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.020591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.020620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.020642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.020670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.020692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.020720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.020742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.020786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.020810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.020870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.020896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.020933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.020959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.021030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.021080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.021143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.021193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.021244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.021295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.021345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.021395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.021446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.021497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.021547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.021597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.021648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 
dnr:0 00:23:31.941 [2024-07-14 21:22:23.021677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.021698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.021772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.021848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.021901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.021957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.021986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.022007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.022036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.022058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.022086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.022107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.022135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.941 [2024-07-14 21:22:23.022156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.022185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.022206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.022234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.022255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.022283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.022304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.022332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.022354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:31.941 [2024-07-14 21:22:23.022402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.941 [2024-07-14 21:22:23.022424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.022452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:23.022473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.022502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:23.022522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.022551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:23.022571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.022600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:23.022651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.022682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:23.022703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.022731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:23.022752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.022793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:23.022821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.022852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:23.022874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.022902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:23.022923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.022952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:23.022974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.023969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:23.024009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.024076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:23.024124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.024167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:23.024190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.024244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:23.024267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.024305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:23.024327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.024365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:31.942 [2024-07-14 21:22:23.024386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.024454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:23.024479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.024519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:23.024542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:23.024603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:23.024630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.554251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:39.554338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.554391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:39.554417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.554449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:39.554472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.554502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:39.554523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.554552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:39.554598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.554631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:39.554653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.554682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 
nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:39.554703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.554732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:39.554769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.554804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:39.554826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.554855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:39.554876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.554905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:39.554926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.554955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:39.554976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.555005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:39.555041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.555069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:39.555089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.555117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:39.555138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.555182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:39.555203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.555232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:39.555253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.555317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:39.555341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.555386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:39.555412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.555442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:39.555464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.555493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:39.555514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.555544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.942 [2024-07-14 21:22:39.555565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:31.942 [2024-07-14 21:22:39.555594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.942 [2024-07-14 21:22:39.555615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.555645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.943 [2024-07-14 21:22:39.555666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.555696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.555717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.555746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.555767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 
dnr:0 00:23:31.943 [2024-07-14 21:22:39.555814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.943 [2024-07-14 21:22:39.555840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.555870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.943 [2024-07-14 21:22:39.555891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.555920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.943 [2024-07-14 21:22:39.555942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.555982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.556004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.556033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.556069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.556097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.943 [2024-07-14 21:22:39.556119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.556146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.943 [2024-07-14 21:22:39.556167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.556195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.943 [2024-07-14 21:22:39.556216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.556244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.943 [2024-07-14 21:22:39.556280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.556324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.556344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.556383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.556403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.556458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.556483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.556513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.556535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.556564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.556585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.556614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.556635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.556664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.943 [2024-07-14 21:22:39.556694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.556725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.943 [2024-07-14 21:22:39.556747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.556790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.943 [2024-07-14 21:22:39.556815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.556846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.943 [2024-07-14 21:22:39.556868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.558623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.943 [2024-07-14 21:22:39.558664] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.558705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.558730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.558761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.558784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.558832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.558856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.558900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.558922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.558996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.559017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.559060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.559081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.559109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.559130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.559158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.943 [2024-07-14 21:22:39.559192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.559223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.943 [2024-07-14 21:22:39.559245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.559296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108728 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.559338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.559369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.559391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:31.943 [2024-07-14 21:22:39.559420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.943 [2024-07-14 21:22:39.559442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.559471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.559492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.559522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.559544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.559573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.559625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.559654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.559690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.559735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.559787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.559820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.559842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.559872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.559894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.559928] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.559961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.559993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.560014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.560043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.560064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.560093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.560114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.560142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.560163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.560192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.560213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.560241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.560262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.560290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.560311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.560340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.560361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.560391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.560412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 
21:22:39.560454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.560479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.560517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.560538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.560567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.560588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.560627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.560650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.560678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.560700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.560728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.560763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.560819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.560843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.563135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.563190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.563260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.563285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.563315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.563337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.563365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.563385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.563414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.563435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.563479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.563500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.563529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.563564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.563612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.563633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.563676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.563697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.563726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.563747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.563792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.944 [2024-07-14 21:22:39.563813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.563841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.563862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:31.944 [2024-07-14 21:22:39.563916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.944 [2024-07-14 21:22:39.563939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
[long run of near-identical nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* records collapsed here: READ and WRITE commands on qid:1, logged between 21:22:39.563 and 21:22:39.575, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) status]
00:23:31.946 Received shutdown signal, test time was about 34.023251 seconds
00:23:31.946
00:23:31.946 Latency(us)
00:23:31.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:31.946 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:31.946 Verification LBA range: start 0x0 length
0x4000 00:23:31.946 Nvme0n1 : 34.02 6238.04 24.37 0.00 0.00 20477.03 945.80 4026531.84 00:23:31.946 =================================================================================================================== 00:23:31.946 Total : 6238.04 24.37 0.00 0.00 20477.03 945.80 4026531.84 00:23:31.946 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:32.512 rmmod nvme_tcp 00:23:32.512 rmmod nvme_fabrics 00:23:32.512 rmmod nvme_keyring 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 82710 ']' 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 82710 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 82710 ']' 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 82710 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82710 00:23:32.512 killing process with pid 82710 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82710' 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 82710 00:23:32.512 21:22:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 82710 00:23:33.884 21:22:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.885 21:22:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:33.885 21:22:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:23:33.885 21:22:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.885 21:22:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.885 21:22:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.885 21:22:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.885 21:22:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.885 21:22:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:33.885 00:23:33.885 real 0m42.164s 00:23:33.885 user 2m14.444s 00:23:33.885 sys 0m11.315s 00:23:33.885 21:22:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.885 21:22:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:33.885 ************************************ 00:23:33.885 END TEST nvmf_host_multipath_status 00:23:33.885 ************************************ 00:23:34.143 21:22:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:34.144 21:22:45 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:34.144 21:22:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:34.144 21:22:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:34.144 21:22:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:34.144 ************************************ 00:23:34.144 START TEST nvmf_discovery_remove_ifc 00:23:34.144 ************************************ 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:34.144 * Looking for test storage... 
00:23:34.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:34.144 Cannot find device "nvmf_tgt_br" 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:23:34.144 Cannot find device "nvmf_tgt_br2" 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:34.144 Cannot find device "nvmf_tgt_br" 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:34.144 Cannot find device "nvmf_tgt_br2" 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:34.144 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:34.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:34.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:34.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:23:34.403 00:23:34.403 --- 10.0.0.2 ping statistics --- 00:23:34.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.403 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:34.403 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:34.403 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:23:34.403 00:23:34.403 --- 10.0.0.3 ping statistics --- 00:23:34.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.403 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:34.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:34.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:23:34.403 00:23:34.403 --- 10.0.0.1 ping statistics --- 00:23:34.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.403 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:34.403 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.662 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=83561 00:23:34.662 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:34.662 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 83561 00:23:34.662 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 83561 ']' 00:23:34.662 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.662 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.662 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.662 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.662 21:22:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.662 [2024-07-14 21:22:46.083390] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:34.662 [2024-07-14 21:22:46.083612] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.920 [2024-07-14 21:22:46.267970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.179 [2024-07-14 21:22:46.516066] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.179 [2024-07-14 21:22:46.516176] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.179 [2024-07-14 21:22:46.516209] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.179 [2024-07-14 21:22:46.516227] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.179 [2024-07-14 21:22:46.516241] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.179 [2024-07-14 21:22:46.516290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.438 [2024-07-14 21:22:46.744574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.697 [2024-07-14 21:22:47.131297] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.697 [2024-07-14 21:22:47.139471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:35.697 null0 00:23:35.697 [2024-07-14 21:22:47.171452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=83592 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83592 /tmp/host.sock 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 83592 ']' 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:35.697 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
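[Editor's note: the target side of this test is now up. The nvmf/common.sh@480 record above launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace, waitforlisten waits for its default RPC socket (/var/tmp/spdk.sock), and the bare rpc_cmd at discovery_remove_ifc.sh line 43 then configures it, which appears to be what produces the TCP transport init and the listener notices for 10.0.0.2 ports 8009 and 4420. A condensed sketch of that bring-up, assuming scripts/rpc.py as the RPC client and a simple polling loop as a stand-in for waitforlisten; the exact RPC payload fed to rpc_cmd is not echoed in the trace, so the configuration step is only described in a comment:

    # Launch the target inside the test namespace (command as shown in the nvmf/common.sh@480 record).
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # Simplified stand-in for waitforlisten: poll the default RPC socket until the app answers.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

    # The test's rpc_cmd call then creates the NVMe/TCP transport plus the discovery (8009) and
    # subsystem (4420) listeners on 10.0.0.2, which is what the tcp.c listen notices above report.
]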
00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.697 21:22:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:35.955 [2024-07-14 21:22:47.318280] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:35.955 [2024-07-14 21:22:47.318452] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83592 ] 00:23:35.955 [2024-07-14 21:22:47.495966] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.213 [2024-07-14 21:22:47.740213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.779 21:22:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.779 21:22:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:23:36.779 21:22:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:36.779 21:22:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:36.779 21:22:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.779 21:22:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.779 21:22:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.779 21:22:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:36.779 21:22:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.779 21:22:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.038 [2024-07-14 21:22:48.489583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:37.296 21:22:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.296 21:22:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:37.296 21:22:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.296 21:22:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:38.248 [2024-07-14 21:22:49.616726] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:38.248 [2024-07-14 21:22:49.616807] 
bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:38.248 [2024-07-14 21:22:49.616841] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:38.248 [2024-07-14 21:22:49.622801] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:38.248 [2024-07-14 21:22:49.689223] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:38.248 [2024-07-14 21:22:49.689329] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:38.249 [2024-07-14 21:22:49.689425] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:38.249 [2024-07-14 21:22:49.689452] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:38.249 [2024-07-14 21:22:49.689505] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:38.249 [2024-07-14 21:22:49.695477] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b000 was disconnected and freed. delete nvme_qpair. 
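[Editor's note: the wait_for_bdev / get_bdev_list records traced here, and repeated below once per one-second poll, are the host-side check that discovery actually produced a bdev. Reconstructed from the @29, @33 and @34 trace records, the helpers behave roughly like this, assuming rpc_cmd wraps scripts/rpc.py with the -s /tmp/host.sock argument seen in the trace; the real helpers presumably also bound the number of retries:

    # Flatten the host app's bdev names into a single sorted line, e.g. "nvme0n1".
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list matches the expectation:
    # "nvme0n1" right after bdev_nvme_start_discovery attaches the controller,
    # "" once the target-side interface is taken down and the controller drops out.
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }
]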
00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:38.249 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.507 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:38.507 21:22:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:39.442 21:22:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:39.442 21:22:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.442 21:22:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:39.442 21:22:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:39.442 21:22:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.442 21:22:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.442 21:22:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:39.442 21:22:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.442 21:22:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:39.442 21:22:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:40.380 21:22:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:40.380 21:22:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.380 21:22:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:40.380 21:22:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:40.380 21:22:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:40.380 21:22:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:40.380 21:22:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:40.380 21:22:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.639 21:22:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:40.639 21:22:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:41.600 21:22:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:41.601 21:22:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.601 21:22:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.601 21:22:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:41.601 21:22:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:41.601 21:22:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.601 21:22:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:41.601 21:22:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.601 21:22:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:41.601 21:22:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:42.540 21:22:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:42.540 21:22:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.540 21:22:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.540 21:22:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:42.540 21:22:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:42.540 21:22:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:42.540 21:22:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:42.540 21:22:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.540 21:22:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:42.540 21:22:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:43.911 21:22:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:43.911 21:22:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.911 21:22:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:43.911 21:22:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.911 21:22:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set 
+x 00:23:43.911 21:22:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:43.911 21:22:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:43.911 21:22:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.911 [2024-07-14 21:22:55.116687] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:43.911 [2024-07-14 21:22:55.116958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.911 [2024-07-14 21:22:55.117203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.911 [2024-07-14 21:22:55.117457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.911 [2024-07-14 21:22:55.117481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.911 [2024-07-14 21:22:55.117497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.911 [2024-07-14 21:22:55.117513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.911 [2024-07-14 21:22:55.117527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.911 [2024-07-14 21:22:55.117541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.911 [2024-07-14 21:22:55.117556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.911 [2024-07-14 21:22:55.117570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.912 [2024-07-14 21:22:55.117584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:23:43.912 [2024-07-14 21:22:55.126713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:23:43.912 [2024-07-14 21:22:55.136716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:43.912 21:22:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:43.912 21:22:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:44.847 21:22:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:44.847 21:22:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.847 21:22:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:44.847 21:22:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:44.847 21:22:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.847 21:22:56 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.847 21:22:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:44.847 [2024-07-14 21:22:56.158802] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:23:44.847 [2024-07-14 21:22:56.158895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4420 00:23:44.847 [2024-07-14 21:22:56.158927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:23:44.847 [2024-07-14 21:22:56.158991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:23:44.847 [2024-07-14 21:22:56.159792] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:44.847 [2024-07-14 21:22:56.159860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:44.847 [2024-07-14 21:22:56.159884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:44.847 [2024-07-14 21:22:56.159912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:44.847 [2024-07-14 21:22:56.159954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.847 [2024-07-14 21:22:56.159975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:44.847 21:22:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.847 21:22:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:44.847 21:22:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:45.782 [2024-07-14 21:22:57.160064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:45.782 [2024-07-14 21:22:57.160145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:45.782 [2024-07-14 21:22:57.160163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:45.782 [2024-07-14 21:22:57.160194] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:23:45.782 [2024-07-14 21:22:57.160244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
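The errno 110 (ETIMEDOUT) failures above are the point of the test, not a defect: earlier in the trace the script deleted 10.0.0.2/24 from nvmf_tgt_if and downed the link inside the target's network namespace, so every reconnect attempt from bdev_nvme now times out. With the options given to bdev_nvme_start_discovery, the host retries roughly once per second (--reconnect-delay-sec 1) and, once the loss timeout expires (--ctrlr-loss-timeout-sec 2), deletes the controller together with its namespace bdev, which is what the wait_for_bdev '' loop is waiting for. A condensed restatement of that failure-injection step, reusing the namespace and interface names from the trace and the helpers sketched earlier:

# Make the target unreachable from the host: drop its data-path address and
# take the veth down inside the nvmf_tgt_ns_spdk namespace.
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

# Once bdev_nvme gives up on the controller, nvme0n1 disappears from bdev_get_bdevs.
wait_for_bdev ''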
00:23:45.782 [2024-07-14 21:22:57.160319] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:45.782 [2024-07-14 21:22:57.160396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.782 [2024-07-14 21:22:57.160428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.782 [2024-07-14 21:22:57.160475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.782 [2024-07-14 21:22:57.160492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.782 [2024-07-14 21:22:57.160506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.782 [2024-07-14 21:22:57.160520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.782 [2024-07-14 21:22:57.160535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.782 [2024-07-14 21:22:57.160549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.782 [2024-07-14 21:22:57.160564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.782 [2024-07-14 21:22:57.160578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.782 [2024-07-14 21:22:57.160591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
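The NOTICE lines above are teardown, not fresh errors: when a controller is disconnected its admin queue is deleted, so the commands still outstanding on it (the ASYNC EVENT REQUESTs and the KEEP ALIVE) complete with ABORTED - SQ DELETION status, and remove_discovery_entry drops the now-unreachable nqn.2016-06.io.spdk:cnode0 path from the discovery service. One way to confirm the host side is empty at this point would be to query the RPC socket directly; this check is not part of the traced script, though both RPCs are standard SPDK ones (SPDK_DIR as in the earlier sketch):

# Both lists should come back empty once the controller and nvme0n1 are gone.
"$SPDK_DIR/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
"$SPDK_DIR/scripts/rpc.py" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'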
00:23:45.782 [2024-07-14 21:22:57.160725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:23:45.782 [2024-07-14 21:22:57.161894] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:45.782 [2024-07-14 21:22:57.161922] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:45.782 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:45.782 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.782 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:45.782 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.782 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:45.782 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.782 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:45.782 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.782 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:45.782 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:45.782 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:45.782 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:45.783 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:45.783 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.783 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.783 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.783 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:45.783 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:45.783 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:45.783 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.041 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:46.041 21:22:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:46.977 21:22:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:46.977 21:22:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.977 21:22:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:46.977 21:22:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:46.977 21:22:58 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.977 21:22:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:46.977 21:22:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:46.977 21:22:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.977 21:22:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:46.977 21:22:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:47.912 [2024-07-14 21:22:59.173942] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:47.912 [2024-07-14 21:22:59.174004] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:47.912 [2024-07-14 21:22:59.174056] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:47.912 [2024-07-14 21:22:59.180070] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:47.912 [2024-07-14 21:22:59.246429] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:47.912 [2024-07-14 21:22:59.246571] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:47.912 [2024-07-14 21:22:59.246701] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:47.912 [2024-07-14 21:22:59.246729] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:47.912 [2024-07-14 21:22:59.246745] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:47.912 [2024-07-14 21:22:59.253137] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b780 was disconnected and freed. delete nvme_qpair. 
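Recovery mirrors the injection: the trace above re-adds 10.0.0.2/24, brings nvmf_tgt_if back up, and the discovery poller re-attaches the subsystem as a new controller. The old name is not reused; the fresh attach gets the next index under the -b nvme base name, which is why it shows up as nvme1/nvme1n1 and why the script now calls wait_for_bdev nvme1n1. A condensed restatement using the helpers sketched earlier:

# Undo the failure injection and wait for the re-discovered namespace bdev.
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1   # new controller is nvme1, so the namespace bdev is nvme1n1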
00:23:47.912 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:47.912 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:47.912 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:47.912 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:47.912 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.912 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:47.913 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:47.913 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.171 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:48.171 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:48.171 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 83592 00:23:48.171 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 83592 ']' 00:23:48.171 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 83592 00:23:48.171 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:48.171 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:48.171 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83592 00:23:48.171 killing process with pid 83592 00:23:48.171 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:48.171 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:48.172 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83592' 00:23:48.172 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 83592 00:23:48.172 21:22:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 83592 00:23:49.549 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:49.549 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:49.549 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:49.550 rmmod nvme_tcp 00:23:49.550 rmmod nvme_fabrics 00:23:49.550 rmmod nvme_keyring 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:23:49.550 21:23:00 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 83561 ']' 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 83561 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 83561 ']' 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 83561 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83561 00:23:49.550 killing process with pid 83561 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83561' 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 83561 00:23:49.550 21:23:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 83561 00:23:50.926 21:23:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:50.926 21:23:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:50.926 21:23:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:50.926 21:23:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:50.926 21:23:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:50.926 21:23:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.926 21:23:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.926 21:23:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.926 21:23:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:50.926 00:23:50.926 real 0m16.872s 00:23:50.926 user 0m28.456s 00:23:50.926 sys 0m2.718s 00:23:50.926 21:23:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:50.926 ************************************ 00:23:50.926 END TEST nvmf_discovery_remove_ifc 00:23:50.926 ************************************ 00:23:50.926 21:23:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.926 21:23:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:50.926 21:23:02 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:50.926 21:23:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:50.926 21:23:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:50.926 21:23:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.926 ************************************ 00:23:50.926 START TEST nvmf_identify_kernel_target 00:23:50.926 ************************************ 00:23:50.926 21:23:02 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:50.926 * Looking for test storage... 00:23:50.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:50.927 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:51.185 Cannot find device "nvmf_tgt_br" 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:51.185 Cannot find device "nvmf_tgt_br2" 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:51.185 Cannot find device "nvmf_tgt_br" 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:51.185 Cannot find device "nvmf_tgt_br2" 00:23:51.185 21:23:02 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:51.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:51.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:51.185 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:51.444 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:51.444 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:51.444 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:51.444 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:51.444 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:51.444 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:51.444 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:23:51.444 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:51.444 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:51.444 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:51.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:23:51.445 00:23:51.445 --- 10.0.0.2 ping statistics --- 00:23:51.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.445 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:51.445 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:51.445 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:23:51.445 00:23:51.445 --- 10.0.0.3 ping statistics --- 00:23:51.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.445 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:51.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:23:51.445 00:23:51.445 --- 10.0.0.1 ping statistics --- 00:23:51.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.445 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:51.445 21:23:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:51.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:51.961 Waiting for block devices as requested 00:23:51.961 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:51.961 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:51.961 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:51.961 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:51.961 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:51.961 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:51.961 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:51.961 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:51.961 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:51.961 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:51.961 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:52.218 No valid GPT data, bailing 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:52.218 No valid GPT data, bailing 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:52.218 No valid GPT data, bailing 00:23:52.218 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:52.219 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:52.219 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:52.219 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:23:52.219 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:52.219 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:52.219 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:23:52.219 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:23:52.219 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:52.219 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:52.219 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:23:52.219 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:52.219 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:52.476 No valid GPT data, bailing 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
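The mkdir above is the start of the kernel-target setup: identify_kernel_nvmf.sh exports the local /dev/nvme1n1 through the kernel nvmet driver (modprobe nvmet appears earlier in the trace; the TCP transport module is assumed to be available as well) rather than through an SPDK target, then points spdk_nvme_identify at it over TCP. Because bash xtrace does not print redirections, the echo lines that follow do not show which configfs attribute each value lands in; the sketch below fills that in using the standard nvmet configfs layout, so the attribute names are inferred rather than taken verbatim from the trace.

# Export /dev/nvme1n1 via the kernel NVMe-oF target on 10.0.0.1:4420 (TCP).
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo "SPDK-$nqn"  > "$subsys/attr_model"              # human-readable model string (attribute inferred)
echo 1            > "$subsys/attr_allow_any_host"     # accept any host NQN
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                   # expose the subsystem on the port

After the ln -s, the nvme discover output later in the trace should list two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.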
00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -a 10.0.0.1 -t tcp -s 4420 00:23:52.476 00:23:52.476 Discovery Log Number of Records 2, Generation counter 2 00:23:52.476 =====Discovery Log Entry 0====== 00:23:52.476 trtype: tcp 00:23:52.476 adrfam: ipv4 00:23:52.476 subtype: current discovery subsystem 00:23:52.476 treq: not specified, sq flow control disable supported 00:23:52.476 portid: 1 00:23:52.476 trsvcid: 4420 00:23:52.476 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:52.476 traddr: 10.0.0.1 00:23:52.476 eflags: none 00:23:52.476 sectype: none 00:23:52.476 =====Discovery Log Entry 1====== 00:23:52.476 trtype: tcp 00:23:52.476 adrfam: ipv4 00:23:52.476 subtype: nvme subsystem 00:23:52.476 treq: not specified, sq flow control disable supported 00:23:52.476 portid: 1 00:23:52.476 trsvcid: 4420 00:23:52.476 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:52.476 traddr: 10.0.0.1 00:23:52.476 eflags: none 00:23:52.476 sectype: none 00:23:52.476 21:23:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:52.476 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:52.734 ===================================================== 00:23:52.734 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:52.734 ===================================================== 00:23:52.734 Controller Capabilities/Features 00:23:52.734 ================================ 00:23:52.734 Vendor ID: 0000 00:23:52.734 Subsystem Vendor ID: 0000 00:23:52.734 Serial Number: 1af8e96bd5a3d9ca8b2c 00:23:52.734 Model Number: Linux 00:23:52.734 Firmware Version: 6.7.0-68 00:23:52.734 Recommended Arb Burst: 0 00:23:52.734 IEEE OUI Identifier: 00 00 00 00:23:52.734 Multi-path I/O 00:23:52.734 May have multiple subsystem ports: No 00:23:52.734 May have multiple controllers: No 00:23:52.734 Associated with SR-IOV VF: No 00:23:52.734 Max Data Transfer Size: Unlimited 00:23:52.734 Max Number of Namespaces: 0 
00:23:52.734 Max Number of I/O Queues: 1024 00:23:52.734 NVMe Specification Version (VS): 1.3 00:23:52.734 NVMe Specification Version (Identify): 1.3 00:23:52.734 Maximum Queue Entries: 1024 00:23:52.734 Contiguous Queues Required: No 00:23:52.734 Arbitration Mechanisms Supported 00:23:52.734 Weighted Round Robin: Not Supported 00:23:52.734 Vendor Specific: Not Supported 00:23:52.734 Reset Timeout: 7500 ms 00:23:52.734 Doorbell Stride: 4 bytes 00:23:52.734 NVM Subsystem Reset: Not Supported 00:23:52.734 Command Sets Supported 00:23:52.734 NVM Command Set: Supported 00:23:52.734 Boot Partition: Not Supported 00:23:52.734 Memory Page Size Minimum: 4096 bytes 00:23:52.734 Memory Page Size Maximum: 4096 bytes 00:23:52.734 Persistent Memory Region: Not Supported 00:23:52.734 Optional Asynchronous Events Supported 00:23:52.734 Namespace Attribute Notices: Not Supported 00:23:52.734 Firmware Activation Notices: Not Supported 00:23:52.734 ANA Change Notices: Not Supported 00:23:52.734 PLE Aggregate Log Change Notices: Not Supported 00:23:52.734 LBA Status Info Alert Notices: Not Supported 00:23:52.734 EGE Aggregate Log Change Notices: Not Supported 00:23:52.734 Normal NVM Subsystem Shutdown event: Not Supported 00:23:52.734 Zone Descriptor Change Notices: Not Supported 00:23:52.734 Discovery Log Change Notices: Supported 00:23:52.734 Controller Attributes 00:23:52.734 128-bit Host Identifier: Not Supported 00:23:52.734 Non-Operational Permissive Mode: Not Supported 00:23:52.734 NVM Sets: Not Supported 00:23:52.734 Read Recovery Levels: Not Supported 00:23:52.734 Endurance Groups: Not Supported 00:23:52.734 Predictable Latency Mode: Not Supported 00:23:52.734 Traffic Based Keep ALive: Not Supported 00:23:52.734 Namespace Granularity: Not Supported 00:23:52.734 SQ Associations: Not Supported 00:23:52.734 UUID List: Not Supported 00:23:52.734 Multi-Domain Subsystem: Not Supported 00:23:52.734 Fixed Capacity Management: Not Supported 00:23:52.734 Variable Capacity Management: Not Supported 00:23:52.734 Delete Endurance Group: Not Supported 00:23:52.734 Delete NVM Set: Not Supported 00:23:52.734 Extended LBA Formats Supported: Not Supported 00:23:52.734 Flexible Data Placement Supported: Not Supported 00:23:52.734 00:23:52.734 Controller Memory Buffer Support 00:23:52.734 ================================ 00:23:52.734 Supported: No 00:23:52.734 00:23:52.734 Persistent Memory Region Support 00:23:52.734 ================================ 00:23:52.734 Supported: No 00:23:52.734 00:23:52.734 Admin Command Set Attributes 00:23:52.734 ============================ 00:23:52.734 Security Send/Receive: Not Supported 00:23:52.734 Format NVM: Not Supported 00:23:52.734 Firmware Activate/Download: Not Supported 00:23:52.734 Namespace Management: Not Supported 00:23:52.734 Device Self-Test: Not Supported 00:23:52.734 Directives: Not Supported 00:23:52.734 NVMe-MI: Not Supported 00:23:52.734 Virtualization Management: Not Supported 00:23:52.734 Doorbell Buffer Config: Not Supported 00:23:52.734 Get LBA Status Capability: Not Supported 00:23:52.734 Command & Feature Lockdown Capability: Not Supported 00:23:52.734 Abort Command Limit: 1 00:23:52.734 Async Event Request Limit: 1 00:23:52.734 Number of Firmware Slots: N/A 00:23:52.734 Firmware Slot 1 Read-Only: N/A 00:23:52.734 Firmware Activation Without Reset: N/A 00:23:52.734 Multiple Update Detection Support: N/A 00:23:52.734 Firmware Update Granularity: No Information Provided 00:23:52.734 Per-Namespace SMART Log: No 00:23:52.734 Asymmetric Namespace Access Log Page: 
Not Supported 00:23:52.734 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:52.734 Command Effects Log Page: Not Supported 00:23:52.734 Get Log Page Extended Data: Supported 00:23:52.734 Telemetry Log Pages: Not Supported 00:23:52.734 Persistent Event Log Pages: Not Supported 00:23:52.734 Supported Log Pages Log Page: May Support 00:23:52.734 Commands Supported & Effects Log Page: Not Supported 00:23:52.734 Feature Identifiers & Effects Log Page:May Support 00:23:52.734 NVMe-MI Commands & Effects Log Page: May Support 00:23:52.734 Data Area 4 for Telemetry Log: Not Supported 00:23:52.734 Error Log Page Entries Supported: 1 00:23:52.734 Keep Alive: Not Supported 00:23:52.734 00:23:52.734 NVM Command Set Attributes 00:23:52.734 ========================== 00:23:52.734 Submission Queue Entry Size 00:23:52.734 Max: 1 00:23:52.734 Min: 1 00:23:52.734 Completion Queue Entry Size 00:23:52.734 Max: 1 00:23:52.734 Min: 1 00:23:52.734 Number of Namespaces: 0 00:23:52.734 Compare Command: Not Supported 00:23:52.734 Write Uncorrectable Command: Not Supported 00:23:52.734 Dataset Management Command: Not Supported 00:23:52.734 Write Zeroes Command: Not Supported 00:23:52.734 Set Features Save Field: Not Supported 00:23:52.734 Reservations: Not Supported 00:23:52.734 Timestamp: Not Supported 00:23:52.734 Copy: Not Supported 00:23:52.734 Volatile Write Cache: Not Present 00:23:52.734 Atomic Write Unit (Normal): 1 00:23:52.734 Atomic Write Unit (PFail): 1 00:23:52.734 Atomic Compare & Write Unit: 1 00:23:52.734 Fused Compare & Write: Not Supported 00:23:52.734 Scatter-Gather List 00:23:52.734 SGL Command Set: Supported 00:23:52.734 SGL Keyed: Not Supported 00:23:52.734 SGL Bit Bucket Descriptor: Not Supported 00:23:52.734 SGL Metadata Pointer: Not Supported 00:23:52.734 Oversized SGL: Not Supported 00:23:52.734 SGL Metadata Address: Not Supported 00:23:52.734 SGL Offset: Supported 00:23:52.734 Transport SGL Data Block: Not Supported 00:23:52.734 Replay Protected Memory Block: Not Supported 00:23:52.734 00:23:52.734 Firmware Slot Information 00:23:52.734 ========================= 00:23:52.734 Active slot: 0 00:23:52.734 00:23:52.734 00:23:52.734 Error Log 00:23:52.734 ========= 00:23:52.734 00:23:52.734 Active Namespaces 00:23:52.734 ================= 00:23:52.734 Discovery Log Page 00:23:52.734 ================== 00:23:52.735 Generation Counter: 2 00:23:52.735 Number of Records: 2 00:23:52.735 Record Format: 0 00:23:52.735 00:23:52.735 Discovery Log Entry 0 00:23:52.735 ---------------------- 00:23:52.735 Transport Type: 3 (TCP) 00:23:52.735 Address Family: 1 (IPv4) 00:23:52.735 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:52.735 Entry Flags: 00:23:52.735 Duplicate Returned Information: 0 00:23:52.735 Explicit Persistent Connection Support for Discovery: 0 00:23:52.735 Transport Requirements: 00:23:52.735 Secure Channel: Not Specified 00:23:52.735 Port ID: 1 (0x0001) 00:23:52.735 Controller ID: 65535 (0xffff) 00:23:52.735 Admin Max SQ Size: 32 00:23:52.735 Transport Service Identifier: 4420 00:23:52.735 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:52.735 Transport Address: 10.0.0.1 00:23:52.735 Discovery Log Entry 1 00:23:52.735 ---------------------- 00:23:52.735 Transport Type: 3 (TCP) 00:23:52.735 Address Family: 1 (IPv4) 00:23:52.735 Subsystem Type: 2 (NVM Subsystem) 00:23:52.735 Entry Flags: 00:23:52.735 Duplicate Returned Information: 0 00:23:52.735 Explicit Persistent Connection Support for Discovery: 0 00:23:52.735 Transport Requirements: 00:23:52.735 
Secure Channel: Not Specified 00:23:52.735 Port ID: 1 (0x0001) 00:23:52.735 Controller ID: 65535 (0xffff) 00:23:52.735 Admin Max SQ Size: 32 00:23:52.735 Transport Service Identifier: 4420 00:23:52.735 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:52.735 Transport Address: 10.0.0.1 00:23:52.735 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:52.994 get_feature(0x01) failed 00:23:52.994 get_feature(0x02) failed 00:23:52.994 get_feature(0x04) failed 00:23:52.994 ===================================================== 00:23:52.994 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:52.994 ===================================================== 00:23:52.994 Controller Capabilities/Features 00:23:52.994 ================================ 00:23:52.994 Vendor ID: 0000 00:23:52.994 Subsystem Vendor ID: 0000 00:23:52.994 Serial Number: bf6d7c8cf21652bedc71 00:23:52.994 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:52.994 Firmware Version: 6.7.0-68 00:23:52.994 Recommended Arb Burst: 6 00:23:52.994 IEEE OUI Identifier: 00 00 00 00:23:52.994 Multi-path I/O 00:23:52.994 May have multiple subsystem ports: Yes 00:23:52.994 May have multiple controllers: Yes 00:23:52.994 Associated with SR-IOV VF: No 00:23:52.994 Max Data Transfer Size: Unlimited 00:23:52.994 Max Number of Namespaces: 1024 00:23:52.994 Max Number of I/O Queues: 128 00:23:52.994 NVMe Specification Version (VS): 1.3 00:23:52.994 NVMe Specification Version (Identify): 1.3 00:23:52.994 Maximum Queue Entries: 1024 00:23:52.994 Contiguous Queues Required: No 00:23:52.994 Arbitration Mechanisms Supported 00:23:52.994 Weighted Round Robin: Not Supported 00:23:52.994 Vendor Specific: Not Supported 00:23:52.994 Reset Timeout: 7500 ms 00:23:52.994 Doorbell Stride: 4 bytes 00:23:52.994 NVM Subsystem Reset: Not Supported 00:23:52.994 Command Sets Supported 00:23:52.994 NVM Command Set: Supported 00:23:52.994 Boot Partition: Not Supported 00:23:52.994 Memory Page Size Minimum: 4096 bytes 00:23:52.994 Memory Page Size Maximum: 4096 bytes 00:23:52.994 Persistent Memory Region: Not Supported 00:23:52.994 Optional Asynchronous Events Supported 00:23:52.994 Namespace Attribute Notices: Supported 00:23:52.994 Firmware Activation Notices: Not Supported 00:23:52.994 ANA Change Notices: Supported 00:23:52.994 PLE Aggregate Log Change Notices: Not Supported 00:23:52.994 LBA Status Info Alert Notices: Not Supported 00:23:52.994 EGE Aggregate Log Change Notices: Not Supported 00:23:52.994 Normal NVM Subsystem Shutdown event: Not Supported 00:23:52.994 Zone Descriptor Change Notices: Not Supported 00:23:52.994 Discovery Log Change Notices: Not Supported 00:23:52.994 Controller Attributes 00:23:52.994 128-bit Host Identifier: Supported 00:23:52.994 Non-Operational Permissive Mode: Not Supported 00:23:52.994 NVM Sets: Not Supported 00:23:52.994 Read Recovery Levels: Not Supported 00:23:52.994 Endurance Groups: Not Supported 00:23:52.994 Predictable Latency Mode: Not Supported 00:23:52.994 Traffic Based Keep ALive: Supported 00:23:52.994 Namespace Granularity: Not Supported 00:23:52.994 SQ Associations: Not Supported 00:23:52.994 UUID List: Not Supported 00:23:52.994 Multi-Domain Subsystem: Not Supported 00:23:52.994 Fixed Capacity Management: Not Supported 00:23:52.994 Variable Capacity Management: Not Supported 00:23:52.994 
Delete Endurance Group: Not Supported 00:23:52.994 Delete NVM Set: Not Supported 00:23:52.994 Extended LBA Formats Supported: Not Supported 00:23:52.994 Flexible Data Placement Supported: Not Supported 00:23:52.994 00:23:52.994 Controller Memory Buffer Support 00:23:52.994 ================================ 00:23:52.994 Supported: No 00:23:52.994 00:23:52.994 Persistent Memory Region Support 00:23:52.994 ================================ 00:23:52.994 Supported: No 00:23:52.994 00:23:52.994 Admin Command Set Attributes 00:23:52.994 ============================ 00:23:52.994 Security Send/Receive: Not Supported 00:23:52.994 Format NVM: Not Supported 00:23:52.994 Firmware Activate/Download: Not Supported 00:23:52.994 Namespace Management: Not Supported 00:23:52.994 Device Self-Test: Not Supported 00:23:52.994 Directives: Not Supported 00:23:52.994 NVMe-MI: Not Supported 00:23:52.994 Virtualization Management: Not Supported 00:23:52.994 Doorbell Buffer Config: Not Supported 00:23:52.994 Get LBA Status Capability: Not Supported 00:23:52.994 Command & Feature Lockdown Capability: Not Supported 00:23:52.994 Abort Command Limit: 4 00:23:52.994 Async Event Request Limit: 4 00:23:52.994 Number of Firmware Slots: N/A 00:23:52.994 Firmware Slot 1 Read-Only: N/A 00:23:52.994 Firmware Activation Without Reset: N/A 00:23:52.994 Multiple Update Detection Support: N/A 00:23:52.994 Firmware Update Granularity: No Information Provided 00:23:52.994 Per-Namespace SMART Log: Yes 00:23:52.994 Asymmetric Namespace Access Log Page: Supported 00:23:52.994 ANA Transition Time : 10 sec 00:23:52.994 00:23:52.994 Asymmetric Namespace Access Capabilities 00:23:52.994 ANA Optimized State : Supported 00:23:52.994 ANA Non-Optimized State : Supported 00:23:52.994 ANA Inaccessible State : Supported 00:23:52.994 ANA Persistent Loss State : Supported 00:23:52.994 ANA Change State : Supported 00:23:52.994 ANAGRPID is not changed : No 00:23:52.994 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:52.994 00:23:52.994 ANA Group Identifier Maximum : 128 00:23:52.994 Number of ANA Group Identifiers : 128 00:23:52.994 Max Number of Allowed Namespaces : 1024 00:23:52.994 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:52.994 Command Effects Log Page: Supported 00:23:52.994 Get Log Page Extended Data: Supported 00:23:52.994 Telemetry Log Pages: Not Supported 00:23:52.994 Persistent Event Log Pages: Not Supported 00:23:52.994 Supported Log Pages Log Page: May Support 00:23:52.994 Commands Supported & Effects Log Page: Not Supported 00:23:52.994 Feature Identifiers & Effects Log Page:May Support 00:23:52.994 NVMe-MI Commands & Effects Log Page: May Support 00:23:52.994 Data Area 4 for Telemetry Log: Not Supported 00:23:52.994 Error Log Page Entries Supported: 128 00:23:52.994 Keep Alive: Supported 00:23:52.994 Keep Alive Granularity: 1000 ms 00:23:52.994 00:23:52.994 NVM Command Set Attributes 00:23:52.994 ========================== 00:23:52.994 Submission Queue Entry Size 00:23:52.994 Max: 64 00:23:52.994 Min: 64 00:23:52.994 Completion Queue Entry Size 00:23:52.994 Max: 16 00:23:52.994 Min: 16 00:23:52.994 Number of Namespaces: 1024 00:23:52.994 Compare Command: Not Supported 00:23:52.994 Write Uncorrectable Command: Not Supported 00:23:52.994 Dataset Management Command: Supported 00:23:52.994 Write Zeroes Command: Supported 00:23:52.994 Set Features Save Field: Not Supported 00:23:52.994 Reservations: Not Supported 00:23:52.994 Timestamp: Not Supported 00:23:52.994 Copy: Not Supported 00:23:52.994 Volatile Write Cache: Present 
00:23:52.994 Atomic Write Unit (Normal): 1 00:23:52.994 Atomic Write Unit (PFail): 1 00:23:52.994 Atomic Compare & Write Unit: 1 00:23:52.994 Fused Compare & Write: Not Supported 00:23:52.994 Scatter-Gather List 00:23:52.994 SGL Command Set: Supported 00:23:52.994 SGL Keyed: Not Supported 00:23:52.994 SGL Bit Bucket Descriptor: Not Supported 00:23:52.994 SGL Metadata Pointer: Not Supported 00:23:52.994 Oversized SGL: Not Supported 00:23:52.994 SGL Metadata Address: Not Supported 00:23:52.994 SGL Offset: Supported 00:23:52.994 Transport SGL Data Block: Not Supported 00:23:52.994 Replay Protected Memory Block: Not Supported 00:23:52.994 00:23:52.994 Firmware Slot Information 00:23:52.994 ========================= 00:23:52.994 Active slot: 0 00:23:52.994 00:23:52.994 Asymmetric Namespace Access 00:23:52.994 =========================== 00:23:52.994 Change Count : 0 00:23:52.994 Number of ANA Group Descriptors : 1 00:23:52.994 ANA Group Descriptor : 0 00:23:52.994 ANA Group ID : 1 00:23:52.994 Number of NSID Values : 1 00:23:52.994 Change Count : 0 00:23:52.994 ANA State : 1 00:23:52.994 Namespace Identifier : 1 00:23:52.994 00:23:52.994 Commands Supported and Effects 00:23:52.994 ============================== 00:23:52.994 Admin Commands 00:23:52.994 -------------- 00:23:52.994 Get Log Page (02h): Supported 00:23:52.994 Identify (06h): Supported 00:23:52.994 Abort (08h): Supported 00:23:52.994 Set Features (09h): Supported 00:23:52.994 Get Features (0Ah): Supported 00:23:52.994 Asynchronous Event Request (0Ch): Supported 00:23:52.994 Keep Alive (18h): Supported 00:23:52.994 I/O Commands 00:23:52.994 ------------ 00:23:52.994 Flush (00h): Supported 00:23:52.994 Write (01h): Supported LBA-Change 00:23:52.994 Read (02h): Supported 00:23:52.994 Write Zeroes (08h): Supported LBA-Change 00:23:52.995 Dataset Management (09h): Supported 00:23:52.995 00:23:52.995 Error Log 00:23:52.995 ========= 00:23:52.995 Entry: 0 00:23:52.995 Error Count: 0x3 00:23:52.995 Submission Queue Id: 0x0 00:23:52.995 Command Id: 0x5 00:23:52.995 Phase Bit: 0 00:23:52.995 Status Code: 0x2 00:23:52.995 Status Code Type: 0x0 00:23:52.995 Do Not Retry: 1 00:23:52.995 Error Location: 0x28 00:23:52.995 LBA: 0x0 00:23:52.995 Namespace: 0x0 00:23:52.995 Vendor Log Page: 0x0 00:23:52.995 ----------- 00:23:52.995 Entry: 1 00:23:52.995 Error Count: 0x2 00:23:52.995 Submission Queue Id: 0x0 00:23:52.995 Command Id: 0x5 00:23:52.995 Phase Bit: 0 00:23:52.995 Status Code: 0x2 00:23:52.995 Status Code Type: 0x0 00:23:52.995 Do Not Retry: 1 00:23:52.995 Error Location: 0x28 00:23:52.995 LBA: 0x0 00:23:52.995 Namespace: 0x0 00:23:52.995 Vendor Log Page: 0x0 00:23:52.995 ----------- 00:23:52.995 Entry: 2 00:23:52.995 Error Count: 0x1 00:23:52.995 Submission Queue Id: 0x0 00:23:52.995 Command Id: 0x4 00:23:52.995 Phase Bit: 0 00:23:52.995 Status Code: 0x2 00:23:52.995 Status Code Type: 0x0 00:23:52.995 Do Not Retry: 1 00:23:52.995 Error Location: 0x28 00:23:52.995 LBA: 0x0 00:23:52.995 Namespace: 0x0 00:23:52.995 Vendor Log Page: 0x0 00:23:52.995 00:23:52.995 Number of Queues 00:23:52.995 ================ 00:23:52.995 Number of I/O Submission Queues: 128 00:23:52.995 Number of I/O Completion Queues: 128 00:23:52.995 00:23:52.995 ZNS Specific Controller Data 00:23:52.995 ============================ 00:23:52.995 Zone Append Size Limit: 0 00:23:52.995 00:23:52.995 00:23:52.995 Active Namespaces 00:23:52.995 ================= 00:23:52.995 get_feature(0x05) failed 00:23:52.995 Namespace ID:1 00:23:52.995 Command Set Identifier: NVM (00h) 
00:23:52.995 Deallocate: Supported 00:23:52.995 Deallocated/Unwritten Error: Not Supported 00:23:52.995 Deallocated Read Value: Unknown 00:23:52.995 Deallocate in Write Zeroes: Not Supported 00:23:52.995 Deallocated Guard Field: 0xFFFF 00:23:52.995 Flush: Supported 00:23:52.995 Reservation: Not Supported 00:23:52.995 Namespace Sharing Capabilities: Multiple Controllers 00:23:52.995 Size (in LBAs): 1310720 (5GiB) 00:23:52.995 Capacity (in LBAs): 1310720 (5GiB) 00:23:52.995 Utilization (in LBAs): 1310720 (5GiB) 00:23:52.995 UUID: 61e842cb-ff89-414e-a849-005c8c3c5267 00:23:52.995 Thin Provisioning: Not Supported 00:23:52.995 Per-NS Atomic Units: Yes 00:23:52.995 Atomic Boundary Size (Normal): 0 00:23:52.995 Atomic Boundary Size (PFail): 0 00:23:52.995 Atomic Boundary Offset: 0 00:23:52.995 NGUID/EUI64 Never Reused: No 00:23:52.995 ANA group ID: 1 00:23:52.995 Namespace Write Protected: No 00:23:52.995 Number of LBA Formats: 1 00:23:52.995 Current LBA Format: LBA Format #00 00:23:52.995 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:23:52.995 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:52.995 rmmod nvme_tcp 00:23:52.995 rmmod nvme_fabrics 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.995 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.253 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:53.253 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:53.253 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:53.253 
21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:53.253 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:53.253 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:53.253 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:53.253 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:53.253 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:53.253 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:53.253 21:23:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:53.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:54.076 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:54.076 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:54.076 00:23:54.076 real 0m3.162s 00:23:54.076 user 0m1.133s 00:23:54.076 sys 0m1.505s 00:23:54.076 21:23:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:54.076 21:23:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.076 ************************************ 00:23:54.076 END TEST nvmf_identify_kernel_target 00:23:54.076 ************************************ 00:23:54.076 21:23:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:54.076 21:23:05 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:54.076 21:23:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:54.076 21:23:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:54.076 21:23:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:54.076 ************************************ 00:23:54.076 START TEST nvmf_auth_host 00:23:54.076 ************************************ 00:23:54.076 21:23:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:54.334 * Looking for test storage... 
00:23:54.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.334 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:54.335 Cannot find device "nvmf_tgt_br" 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:54.335 Cannot find device "nvmf_tgt_br2" 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:54.335 Cannot find device "nvmf_tgt_br" 
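The "Cannot find device" messages in this stretch are expected: nvmf_veth_init first tears down whatever interfaces a previous run may have left behind, then rebuilds the virtual test network used for the rest of the run. A condensed sketch of the topology it creates (namespace, interface names and addresses are the ones visible in the trace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # ...plus bringing every interface (and lo inside the namespace) up,
    # opening TCP/4420 in iptables, and the three verification pings below.

The nvmf_tgt application is later started inside that namespace (NVMF_APP is prefixed with "ip netns exec nvmf_tgt_ns_spdk"), so 10.0.0.2 and 10.0.0.3 are target-side addresses and 10.0.0.1 is the initiator side.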
00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:54.335 Cannot find device "nvmf_tgt_br2" 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:54.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:54.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:54.335 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:54.593 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:54.593 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:54.593 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:54.593 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:54.593 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:54.593 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:54.593 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:54.593 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:54.593 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:54.593 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:54.593 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:54.593 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:54.593 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:54.593 21:23:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:54.593 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:54.593 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:23:54.593 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:54.593 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:54.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:23:54.594 00:23:54.594 --- 10.0.0.2 ping statistics --- 00:23:54.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.594 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:54.594 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:54.594 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:23:54.594 00:23:54.594 --- 10.0.0.3 ping statistics --- 00:23:54.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.594 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:54.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:54.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:23:54.594 00:23:54.594 --- 10.0.0.1 ping statistics --- 00:23:54.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.594 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=84513 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 84513 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 84513 ']' 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.594 21:23:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.594 21:23:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e94a18d6de686f26b8cf6f11f7ff4c9f 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Bfl 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e94a18d6de686f26b8cf6f11f7ff4c9f 0 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e94a18d6de686f26b8cf6f11f7ff4c9f 0 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e94a18d6de686f26b8cf6f11f7ff4c9f 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Bfl 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Bfl 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Bfl 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c4c582b0a815cba4727e854e4b8ca7d7dc2ae4984b61e42a4f7a3cd0a194d4e0 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.UIn 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c4c582b0a815cba4727e854e4b8ca7d7dc2ae4984b61e42a4f7a3cd0a194d4e0 3 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c4c582b0a815cba4727e854e4b8ca7d7dc2ae4984b61e42a4f7a3cd0a194d4e0 3 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c4c582b0a815cba4727e854e4b8ca7d7dc2ae4984b61e42a4f7a3cd0a194d4e0 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.UIn 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.UIn 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.UIn 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=001c5d40e6e3eabbfa3f73f3884aac1d6d5988dae21b3478 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.C8x 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 001c5d40e6e3eabbfa3f73f3884aac1d6d5988dae21b3478 0 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 001c5d40e6e3eabbfa3f73f3884aac1d6d5988dae21b3478 0 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=001c5d40e6e3eabbfa3f73f3884aac1d6d5988dae21b3478 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.C8x 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.C8x 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.C8x 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=459b3e46b1fdcb75d54987f916a11937601310ccd6aae4dc 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.q8m 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 459b3e46b1fdcb75d54987f916a11937601310ccd6aae4dc 2 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 459b3e46b1fdcb75d54987f916a11937601310ccd6aae4dc 2 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=459b3e46b1fdcb75d54987f916a11937601310ccd6aae4dc 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.q8m 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.q8m 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.q8m 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a29827e7d4bca7ca9761cc1559527503 00:23:55.968 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.iZM 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a29827e7d4bca7ca9761cc1559527503 
1 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a29827e7d4bca7ca9761cc1559527503 1 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a29827e7d4bca7ca9761cc1559527503 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.iZM 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.iZM 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.iZM 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ec7b59c9f69282987e3137fca5b2a8ae 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.eYk 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ec7b59c9f69282987e3137fca5b2a8ae 1 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ec7b59c9f69282987e3137fca5b2a8ae 1 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:56.227 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ec7b59c9f69282987e3137fca5b2a8ae 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.eYk 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.eYk 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.eYk 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:56.228 21:23:07 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1341cac7d2becdcd9462953e529bbe7e7dacf573346c75c9 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.2cB 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1341cac7d2becdcd9462953e529bbe7e7dacf573346c75c9 2 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1341cac7d2becdcd9462953e529bbe7e7dacf573346c75c9 2 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1341cac7d2becdcd9462953e529bbe7e7dacf573346c75c9 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.2cB 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.2cB 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.2cB 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=91dca877dd45761e007ef542a38824b6 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.sZk 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 91dca877dd45761e007ef542a38824b6 0 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 91dca877dd45761e007ef542a38824b6 0 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=91dca877dd45761e007ef542a38824b6 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:56.228 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.sZk 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.sZk 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.sZk 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d852e306727753f66f44666893c4f74fdb06c6f25fd940c684fb0aefbbe07cf1 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.cD3 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d852e306727753f66f44666893c4f74fdb06c6f25fd940c684fb0aefbbe07cf1 3 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d852e306727753f66f44666893c4f74fdb06c6f25fd940c684fb0aefbbe07cf1 3 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d852e306727753f66f44666893c4f74fdb06c6f25fd940c684fb0aefbbe07cf1 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.cD3 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.cD3 00:23:56.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.cD3 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 84513 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 84513 ']' 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
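The gen_dhchap_key calls traced above reduce to a small helper pair: draw len/2 random bytes as hex, wrap the hex string in a DHHC-1 secret, and stash the result in a 0600 temp file whose path the caller stores in keys[]/ckeys[]. A minimal stand-alone sketch reconstructed from the commands visible in the xtrace; the inline Python encoding is an assumption based on the DH-HMAC-CHAP secret representation (base64 of the secret plus a little-endian CRC-32), and the real helpers in nvmf/common.sh may differ in detail:

#!/usr/bin/env bash
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

format_dhchap_key() { # format_dhchap_key <ascii-secret> <digest-id>
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed: secret || CRC32(secret), little-endian
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
PY
}

gen_dhchap_key() { # gen_dhchap_key <digest> <len>, e.g. gen_dhchap_key sha384 48
  local digest=$1 len=$2 key file
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of randomness
  file=$(mktemp -t "spdk.key-$digest.XXX")
  format_dhchap_key "$key" "${digests[$digest]}" > "$file"
  chmod 0600 "$file"
  echo "$file"                                     # caller keeps only the file path
}

With these helpers, keys[4]=$(gen_dhchap_key sha512 64) matches the trace above, and ckeys[4] stays empty because key 4 has no controller key.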
00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.487 21:23:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Bfl 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.UIn ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UIn 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.C8x 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.q8m ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.q8m 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.iZM 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.eYk ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eYk 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
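Once waitforlisten confirms the target (pid 84513) is answering on /var/tmp/spdk.sock, the host/auth.sh@80-82 loop traced above and continuing below registers every generated key file, plus its controller key when one exists, with the target's keyring. Condensed from the xtrace (rpc_cmd is assumed to wrap scripts/rpc.py against that socket):

for i in "${!keys[@]}"; do
  rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"    # e.g. key0 -> /tmp/spdk.key-null.Bfl
  if [[ -n ${ckeys[i]} ]]; then                        # ckeys[4] is empty, so ckey4 is skipped
    rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
  fi
done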
00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.2cB 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.sZk ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.sZk 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.cD3 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
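The nvmet_auth_init / configure_kernel_target sequence traced below builds a Linux kernel NVMe-oF soft target over configfs: load nvmet, walk /sys/block/nvme* skipping zoned or in-use disks (ending up with /dev/nvme1n1), expose that disk as namespace 1 of nqn.2024-02.io.spdk:cnode0, and open TCP port 4420 on 10.0.0.1. A condensed sketch using the values from the trace; the redirection targets are not visible in the xtrace, so the attribute file names below follow the standard nvmet configfs layout and should be treated as assumptions:

modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"     # assumed target: attr_model
echo 1 > "$subsys/attr_allow_any_host"                          # assumed target: attr_allow_any_host
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"          # last unused, non-zoned disk found
echo 1 > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"

ln -s "$subsys" "$nvmet/ports/1/subsystems/"

After this, the nvme discover run against 10.0.0.1:4420 in the log below lists the discovery subsystem plus nqn.2024-02.io.spdk:cnode0, which is what the auth test then connects to with the DHCHAP keys registered above.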
00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:56.747 21:23:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:57.313 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:57.313 Waiting for block devices as requested 00:23:57.313 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:57.313 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:57.938 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:57.938 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:57.938 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:57.938 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:57.938 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:57.938 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:57.938 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:57.938 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:57.938 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:57.938 No valid GPT data, bailing 00:23:57.939 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:58.197 No valid GPT data, bailing 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:58.197 No valid GPT data, bailing 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:58.197 No valid GPT data, bailing 00:23:58.197 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:58.456 21:23:09 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -a 10.0.0.1 -t tcp -s 4420 00:23:58.456 00:23:58.456 Discovery Log Number of Records 2, Generation counter 2 00:23:58.456 =====Discovery Log Entry 0====== 00:23:58.456 trtype: tcp 00:23:58.456 adrfam: ipv4 00:23:58.456 subtype: current discovery subsystem 00:23:58.456 treq: not specified, sq flow control disable supported 00:23:58.456 portid: 1 00:23:58.456 trsvcid: 4420 00:23:58.456 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:58.456 traddr: 10.0.0.1 00:23:58.456 eflags: none 00:23:58.456 sectype: none 00:23:58.456 =====Discovery Log Entry 1====== 00:23:58.456 trtype: tcp 00:23:58.456 adrfam: ipv4 00:23:58.456 subtype: nvme subsystem 00:23:58.456 treq: not specified, sq flow control disable supported 00:23:58.456 portid: 1 00:23:58.456 trsvcid: 4420 00:23:58.456 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:58.456 traddr: 10.0.0.1 00:23:58.456 eflags: none 00:23:58.456 sectype: none 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.456 21:23:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.715 nvme0n1 00:23:58.715 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.715 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.715 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.715 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.715 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.715 21:23:10 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.715 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.715 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.715 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.715 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.715 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.715 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:58.715 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:58.715 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.715 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: ]] 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.716 nvme0n1 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.716 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.974 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.975 nvme0n1 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.975 21:23:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: ]] 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.975 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.234 nvme0n1 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.234 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: ]] 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:59.235 21:23:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.235 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 nvme0n1 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:59.494 21:23:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 nvme0n1 00:23:59.494 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.494 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.494 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.494 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.494 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.751 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:00.008 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:00.008 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: ]] 00:24:00.008 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.009 nvme0n1 00:24:00.009 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.267 nvme0n1 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.267 21:23:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.267 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: ]] 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.526 nvme0n1 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.526 21:23:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: ]] 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.526 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.784 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.784 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.784 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.784 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.784 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.785 nvme0n1 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.785 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.052 nvme0n1 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.052 21:23:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: ]] 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
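Each iteration of the loop traced above exercises SPDK's in-band DH-HMAC-CHAP authentication with one digest/DH-group/key combination: restrict the host's allowed parameters, attach the controller with the host key (and controller key, when one is defined), verify that a controller named nvme0 appears, then detach it. A minimal stand-alone sketch of one such pass, using only RPCs and flags that appear in this trace and assuming the standard scripts/rpc.py client plus the key names (key0/ckey0) registered earlier in the test:

  # restrict negotiation to the digest/DH group under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  # attach with the host key and (optionally) the controller key
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # authentication succeeded if the controller is listed
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0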
00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.986 nvme0n1 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.986 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.987 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.244 nvme0n1 00:24:02.244 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.244 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.244 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.244 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.244 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.244 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: ]] 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.503 21:23:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.761 nvme0n1 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: ]] 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.761 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.019 nvme0n1 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.019 21:23:14 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.019 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.277 nvme0n1 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.277 21:23:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:05.175 21:23:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:05.175 21:23:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: ]] 00:24:05.175 21:23:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:05.175 21:23:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:05.175 21:23:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.175 21:23:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:05.176 21:23:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:05.176 21:23:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:05.176 21:23:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.176 21:23:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:05.176 21:23:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.176 21:23:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.434 21:23:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.434 21:23:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.434 21:23:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.434 21:23:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.434 21:23:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.434 21:23:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.434 21:23:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.434 21:23:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.434 21:23:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.434 21:23:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.434 21:23:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.434 21:23:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.434 21:23:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:05.434 21:23:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.434 21:23:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.693 nvme0n1 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.693 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.260 nvme0n1 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: ]] 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.260 
21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.260 21:23:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.261 21:23:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:06.261 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.261 21:23:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.827 nvme0n1 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: ]] 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.827 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.828 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.395 nvme0n1 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.395 21:23:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.395 21:23:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.654 nvme0n1 00:24:07.654 21:23:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.654 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.654 21:23:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.654 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.654 21:23:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.654 21:23:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.654 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.654 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.654 21:23:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.654 21:23:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: ]] 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:07.913 21:23:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.914 21:23:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:07.914 21:23:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:07.914 21:23:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:07.914 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:07.914 21:23:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.914 21:23:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.484 nvme0n1 00:24:08.484 21:23:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.484 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.484 21:23:19 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.484 21:23:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.484 21:23:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.484 21:23:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.484 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.745 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.745 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.745 21:23:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:08.745 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.745 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.745 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.745 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.745 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.745 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.745 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.745 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.745 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.745 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:08.745 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.745 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.308 nvme0n1 00:24:09.308 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.308 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.309 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.309 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.309 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.309 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.309 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.309 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.309 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.309 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: ]] 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.566 21:23:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.130 nvme0n1 00:24:10.130 21:23:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.130 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.130 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.130 21:23:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.130 21:23:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.130 21:23:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.130 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.130 
21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.130 21:23:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.130 21:23:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: ]] 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:10.387 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.388 21:23:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.953 nvme0n1 00:24:10.953 21:23:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.953 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.953 21:23:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.953 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.953 21:23:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.953 21:23:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:11.211 
21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.211 21:23:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.777 nvme0n1 00:24:11.777 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.777 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.777 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.777 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.777 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.777 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: ]] 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.037 nvme0n1 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.037 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.038 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.298 nvme0n1 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: ]] 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.298 nvme0n1 00:24:12.298 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.557 21:23:23 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.557 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.557 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.557 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.557 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: ]] 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.558 21:23:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.558 nvme0n1 00:24:12.558 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.558 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.558 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.558 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.558 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.558 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.558 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.558 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.558 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.558 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.817 nvme0n1 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: ]] 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.817 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.075 nvme0n1 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
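Each dhgroup/keyid iteration above repeats the same host-side RPC sequence: restrict bdev_nvme to the digest and DH group under test, attach the controller with the host DH-HMAC-CHAP key (plus the controller key when that keyid has one), check that nvme0 appears, and detach before the next combination. Below is a minimal sketch of one such iteration driven directly through SPDK's scripts/rpc.py; it assumes the test's rpc_cmd helper maps onto the same RPC calls, that key1/ckey1 were registered with the host app earlier in the run (not shown in this excerpt), and that the target listening on 10.0.0.1:4420 was already given the matching secret via the nvmet_auth_set_key step seen in the log.

#!/usr/bin/env bash
# Sketch of one nvmf_auth_host iteration (sha384 / ffdhe3072 / keyid 1).
# Assumptions: an SPDK host application is running and reachable through
# scripts/rpc.py, and key1/ckey1 were set up earlier in the test run.
set -euo pipefail

rpc=scripts/rpc.py        # assumed path; the log's rpc_cmd wrapper issues the same RPCs
digest=sha384
dhgroup=ffdhe3072
keyid=1

# Controller keys per keyid; in this log keyid 4 has no ckey, so it would stay unset.
declare -A ckeys=([1]=ckey1)

# 1. Only allow the digest/dhgroup pair under test to be negotiated.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Attach with the host key, adding --dhchap-ctrlr-key only when a ckey exists
#    (mirrors the ${ckeys[keyid]:+...} expansion visible in the log).
ckey_args=()
[[ -n "${ckeys[$keyid]:-}" ]] && ckey_args=(--dhchap-ctrlr-key "${ckeys[$keyid]}")
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey_args[@]}"

# 3. Verify the authenticated controller came up, then tear it down for the next round.
"$rpc" bdev_nvme_get_controllers | jq -r '.[].name'    # expected output: nvme0
"$rpc" bdev_nvme_detach_controller nvme0

The ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion in the log is what drops the controller key for keyid 4, whose ckey entry is empty, so that combination is attached with --dhchap-key alone.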
00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.075 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.332 nvme0n1 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: ]] 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.332 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.590 nvme0n1 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: ]] 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.590 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.591 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.591 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.591 21:23:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.591 21:23:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:13.591 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.591 21:23:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.591 nvme0n1 00:24:13.591 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.591 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.591 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.591 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.591 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.591 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.849 nvme0n1 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.849 21:23:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.849 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: ]] 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.107 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.366 nvme0n1 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.366 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.626 nvme0n1 00:24:14.626 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.626 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.626 21:23:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.626 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.626 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.626 21:23:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.626 21:23:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: ]] 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.626 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.885 nvme0n1 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: ]] 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:14.885 21:23:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.885 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.143 nvme0n1 00:24:15.144 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.144 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.144 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.144 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.144 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.144 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.144 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.144 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.144 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.144 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:15.402 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.403 nvme0n1 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.403 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.662 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.662 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.662 21:23:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.662 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.662 21:23:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: ]] 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.662 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.921 nvme0n1 00:24:15.921 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.921 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.921 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.921 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.921 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.921 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.201 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.460 nvme0n1 00:24:16.460 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.460 21:23:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.460 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.460 21:23:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.460 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.460 21:23:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.719 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.719 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.719 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.719 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.719 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.719 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.719 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:16.719 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.719 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.719 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.719 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:16.719 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:16.719 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:16.719 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.719 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: ]] 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.720 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.978 nvme0n1 00:24:16.978 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.978 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.978 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.978 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.978 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.978 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.978 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.978 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.978 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.978 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: ]] 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.237 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.496 nvme0n1 00:24:17.496 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.496 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.496 21:23:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.496 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.496 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.496 21:23:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.496 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:24:17.496 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.496 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.496 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.754 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
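The get_main_ns_ip trace that repeats before every attach in this log is a transport-to-address lookup: it maps the transport name to the environment variable holding the right IP and echoes its value (here 10.0.0.1). A minimal sketch of that helper, reconstructed from the xtrace output above, follows; TEST_TRANSPORT and the early-return checks are assumptions, as only the variable names NVMF_FIRST_TARGET_IP and NVMF_INITIATOR_IP appear in the trace.

    # Sketch of the address selection seen in the nvmf/common.sh trace (assumed shape).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # TEST_TRANSPORT is assumed; the trace shows the tcp branch being taken.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}

        # Indirect expansion: resolve the chosen variable name to its value.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"    # 10.0.0.1 in this run, used as the attach target address
    }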
00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.755 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.013 nvme0n1 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:18.013 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:18.014 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: ]] 00:24:18.014 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:18.014 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:18.014 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
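The ffdhe8192 pass that starts here repeats the same connect_authenticate sequence as the ffdhe6144 pass above, once per key id. A condensed sketch of a single pass (sha384, ffdhe8192, keyid 0) is shown below; it assumes rpc_cmd forwards to the SPDK RPC interface of the running initiator and that the named keys key0/ckey0 were registered earlier in the script, outside this excerpt. All RPC names and flags are taken verbatim from the trace.

    # One DH-HMAC-CHAP round-trip, condensed from the trace in this section.
    digest=sha384 dhgroup=ffdhe8192 keyid=0

    # 1. Restrict the initiator to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # 2. Attach over TCP with DH-HMAC-CHAP; the ctrlr key is only passed when a
    #    controller key exists for this keyid (bidirectional authentication).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # 3. Authentication succeeded iff the controller (and its nvme0n1 namespace) shows up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    # 4. Tear down before the next digest/dhgroup/keyid combination.
    rpc_cmd bdev_nvme_detach_controller nvme0

The surrounding for-loops visible in the trace ("for digest", "for dhgroup", "for keyid") simply run this pass over every digest x dhgroup x keyid combination, which is why the same RPC sequence recurs throughout the log.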
00:24:18.014 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.014 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:18.014 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:18.014 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.014 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:18.014 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.014 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.272 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.272 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.272 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:18.272 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.272 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.272 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.272 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.272 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:18.272 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.272 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:18.272 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:18.272 21:23:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:18.272 21:23:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.272 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.272 21:23:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.838 nvme0n1 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.839 21:23:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.098 21:23:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.098 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.098 21:23:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.098 21:23:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.098 21:23:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.098 21:23:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.098 21:23:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.098 21:23:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.098 21:23:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.098 21:23:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.098 21:23:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.098 21:23:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.098 21:23:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:19.098 21:23:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.098 21:23:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.665 nvme0n1 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: ]] 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.665 21:23:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.924 21:23:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.924 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.924 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.924 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.491 nvme0n1 00:24:20.491 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.491 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.491 21:23:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.491 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.491 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.491 21:23:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.491 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: ]] 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.492 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.430 nvme0n1 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.430 21:23:32 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.430 21:23:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.365 nvme0n1 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.365 nvme0n1 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.365 21:23:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.365 21:23:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.623 nvme0n1 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: ]] 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.623 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.881 nvme0n1 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.881 21:23:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: ]] 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:22.881 21:23:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.881 nvme0n1 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.881 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.138 nvme0n1 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.138 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.396 nvme0n1 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.396 
21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.396 21:23:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.396 21:23:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.654 nvme0n1 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
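[annotation] The host/auth.sh@42-51 entries traced above are the target-side half of each iteration: nvmet_auth_set_key stores the digest name ('hmac(sha512)'), the FFDHE group, and the DHHC-1 secrets for the host NQN before the initiator tries to connect. The echo targets themselves do not appear in the xtrace output (redirections are not traced), so the sketch below is an assumption based on the Linux kernel nvmet configfs layout, not something visible in this log; attribute names and paths may differ from what auth.sh actually writes.

    # Hedged sketch of the target-side key setup implied by host/auth.sh@48-51.
    # Paths/attribute names are assumptions; only the echoed values come from the log.
    hostnqn=nqn.2024-02.io.spdk:host0                    # host NQN used by this test run
    host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn     # assumed configfs location
    echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"        # digest for DH-HMAC-CHAP
    echo ffdhe3072      > "$host_cfg/dhchap_dhgroup"     # FFDHE group for this iteration
    echo 'DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b:' > "$host_cfg/dhchap_key"
    # Written only when a controller key is configured (bidirectional authentication):
    echo 'DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL:' > "$host_cfg/dhchap_ctrl_key"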
00:24:23.654 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: ]] 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.655 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.913 nvme0n1 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.913 21:23:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: ]] 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
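[annotation] The nvmf/common.sh@741-755 entries around this point are the get_main_ns_ip helper resolving which address the initiator should dial: it maps the transport to a variable name (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and indirectly expands it, which yields 10.0.0.1 in this run. A minimal reconstruction from the trace is sketched below; the trace only shows the expanded value "tcp", so the $TEST_TRANSPORT variable name and the exact error handling are assumptions.

    # Minimal sketch of get_main_ns_ip as reconstructed from nvmf/common.sh@741-755.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # TEST_TRANSPORT is tcp in this run, so the initiator-side address is chosen
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}     # name of the variable holding the IP
        [[ -z ${!ip} ]] && return 1              # indirect expansion, 10.0.0.1 here
        echo "${!ip}"
    }

    main_ip=$(get_main_ns_ip)    # -> 10.0.0.1 in this run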
00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.913 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.172 nvme0n1 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.172 
21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.172 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.431 nvme0n1 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: ]] 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.431 21:23:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.690 nvme0n1 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.690 21:23:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.690 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.949 nvme0n1 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
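[annotation] The DHHC-1:xx:...: strings set on each iteration are NVMe DH-HMAC-CHAP secrets in the representation produced by nvme-cli's gen-dhchap-key: the xx field selects the HMAC used to transform the secret (0 = none, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512) and the base64 payload carries the secret bytes followed by a 4-byte CRC-32. Those format details come from the nvme-cli convention, not from this log, so treat them as assumptions; the quick check below only decodes one of the secrets shown above and counts its bytes.

    # Inspect one of the secrets from the trace above (keyid=2, xx=01, SHA-256 sized).
    secret='DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b:'
    printf '%s' "$secret" | cut -d: -f3 | base64 -d | wc -c
    # prints 36: assumed to be a 32-byte secret plus a 4-byte CRC-32 trailer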
00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: ]] 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.949 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.209 nvme0n1 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: ]] 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.209 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.468 nvme0n1 00:24:25.468 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.468 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.468 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.468 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.468 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.468 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.468 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.468 21:23:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.468 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.468 21:23:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.468 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.727 nvme0n1 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:24:25.727 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: ]] 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.027 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.290 nvme0n1 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:24:26.290 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
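[annotation] The host/auth.sh@101-104 markers show the overall structure: an outer loop over dhgroups (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144 so far) and an inner loop over key ids 0-4, each iteration setting the target key and then calling connect_authenticate, whose body (@55-65) is the initiator-side RPC sequence. The stray "nvme0n1" lines are the bdev name printed by the attach RPC, and the "[[ nvme0 == nvme0 ]]" check asserts the authenticated controller actually came up before it is detached again. The sketch below shows one iteration using SPDK's scripts/rpc.py directly instead of the test's rpc_cmd wrapper (which forwards to the same JSON-RPC server); the rpc.py path and the assumption that key1/ckey1 are keyring entries registered earlier in auth.sh (outside this excerpt) are not confirmed by this log.

    # One connect_authenticate iteration (digest sha512, dhgroup ffdhe6144, keyid 1),
    # expressed with rpc.py; flags mirror the rpc_cmd calls in the trace above.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1   # ctrlr key only for bidirectional auth
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0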
00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.291 21:23:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.570 nvme0n1 00:24:26.570 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.570 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.570 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.570 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.570 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.570 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.570 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.570 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.570 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.570 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: ]] 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.841 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.100 nvme0n1 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: ]] 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.100 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.101 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.360 nvme0n1 00:24:27.360 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.360 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.360 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.360 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.360 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.360 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.360 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.360 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.360 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.360 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.618 21:23:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.877 nvme0n1 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.877 21:23:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk0YTE4ZDZkZTY4NmYyNmI4Y2Y2ZjExZjdmZjRjOWa7S/ne: 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: ]] 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzRjNTgyYjBhODE1Y2JhNDcyN2U4NTRlNGI4Y2E3ZDdkYzJhZTQ5ODRiNjFlNDJhNGY3YTNjZDBhMTk0ZDRlMH6cD7w=: 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.877 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.446 nvme0n1 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.446 21:23:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.014 nvme0n1 00:24:29.014 21:23:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.014 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.014 21:23:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.014 21:23:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.014 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.273 21:23:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.273 21:23:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.273 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.273 21:23:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.273 21:23:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.273 21:23:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.273 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.273 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:29.273 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.273 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.273 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.273 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:29.273 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:29.273 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTI5ODI3ZTdkNGJjYTdjYTk3NjFjYzE1NTk1Mjc1MDO58T1b: 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: ]] 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWM3YjU5YzlmNjkyODI5ODdlMzEzN2ZjYTViMmE4YWWmz3VL: 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.274 21:23:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.840 nvme0n1 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTM0MWNhYzdkMmJlY2RjZDk0NjI5NTNlNTI5YmJlN2U3ZGFjZjU3MzM0NmM3NWM5V5IFDg==: 00:24:29.840 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: ]] 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTFkY2E4NzdkZDQ1NzYxZTAwN2VmNTQyYTM4ODI0Yjb3+ulX: 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:29.841 21:23:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.841 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.407 nvme0n1 00:24:30.407 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.407 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.407 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.407 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.408 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.408 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.408 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.408 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.408 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.408 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDg1MmUzMDY3Mjc3NTNmNjZmNDQ2NjY4OTNjNGY3NGZkYjA2YzZmMjVmZDk0MGM2ODRmYjBhZWZiYmUwN2NmMaTVdVw=: 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:30.666 21:23:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.234 nvme0n1 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDAxYzVkNDBlNmUzZWFiYmZhM2Y3M2YzODg0YWFjMWQ2ZDU5ODhkYWUyMWIzNDc4+KEtoQ==: 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: ]] 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDU5YjNlNDZiMWZkY2I3NWQ1NDk4N2Y5MTZhMTE5Mzc2MDEzMTBjY2Q2YWFlNGRjT7hRkg==: 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.234 
21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.234 request: 00:24:31.234 { 00:24:31.234 "name": "nvme0", 00:24:31.234 "trtype": "tcp", 00:24:31.234 "traddr": "10.0.0.1", 00:24:31.234 "adrfam": "ipv4", 00:24:31.234 "trsvcid": "4420", 00:24:31.234 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:31.234 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:31.234 "prchk_reftag": false, 00:24:31.234 "prchk_guard": false, 00:24:31.234 "hdgst": false, 00:24:31.234 "ddgst": false, 00:24:31.234 "method": "bdev_nvme_attach_controller", 00:24:31.234 "req_id": 1 00:24:31.234 } 00:24:31.234 Got JSON-RPC error response 00:24:31.234 response: 00:24:31.234 { 00:24:31.234 "code": -5, 00:24:31.234 "message": "Input/output error" 00:24:31.234 } 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:31.234 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.235 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:31.235 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.235 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.235 request: 00:24:31.235 { 00:24:31.235 "name": "nvme0", 00:24:31.235 "trtype": "tcp", 00:24:31.235 "traddr": "10.0.0.1", 00:24:31.235 "adrfam": "ipv4", 00:24:31.235 "trsvcid": "4420", 00:24:31.235 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:31.235 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:31.235 "prchk_reftag": false, 00:24:31.235 "prchk_guard": false, 00:24:31.235 "hdgst": false, 00:24:31.235 "ddgst": false, 00:24:31.235 "dhchap_key": "key2", 00:24:31.235 "method": "bdev_nvme_attach_controller", 00:24:31.235 "req_id": 1 00:24:31.235 } 00:24:31.235 Got JSON-RPC error response 00:24:31.235 response: 00:24:31.235 { 00:24:31.235 "code": -5, 00:24:31.235 "message": "Input/output error" 00:24:31.235 } 00:24:31.235 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:31.235 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:31.235 21:23:42 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:31.235 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:31.235 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:31.235 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.235 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.235 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:31.235 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.235 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.493 request: 00:24:31.493 { 00:24:31.493 "name": "nvme0", 00:24:31.493 "trtype": "tcp", 00:24:31.493 "traddr": "10.0.0.1", 00:24:31.493 "adrfam": "ipv4", 
00:24:31.493 "trsvcid": "4420", 00:24:31.493 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:31.493 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:31.493 "prchk_reftag": false, 00:24:31.493 "prchk_guard": false, 00:24:31.493 "hdgst": false, 00:24:31.493 "ddgst": false, 00:24:31.493 "dhchap_key": "key1", 00:24:31.493 "dhchap_ctrlr_key": "ckey2", 00:24:31.493 "method": "bdev_nvme_attach_controller", 00:24:31.493 "req_id": 1 00:24:31.493 } 00:24:31.493 Got JSON-RPC error response 00:24:31.493 response: 00:24:31.493 { 00:24:31.493 "code": -5, 00:24:31.493 "message": "Input/output error" 00:24:31.493 } 00:24:31.493 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:31.494 rmmod nvme_tcp 00:24:31.494 rmmod nvme_fabrics 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 84513 ']' 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 84513 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 84513 ']' 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 84513 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84513 00:24:31.494 killing process with pid 84513 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84513' 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 84513 00:24:31.494 21:23:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 84513 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:32.427 
21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:32.427 21:23:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:32.992 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:33.250 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:33.250 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:33.250 21:23:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Bfl /tmp/spdk.key-null.C8x /tmp/spdk.key-sha256.iZM /tmp/spdk.key-sha384.2cB /tmp/spdk.key-sha512.cD3 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:24:33.250 21:23:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:33.508 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:33.766 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:33.766 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:33.766 ************************************ 00:24:33.766 END TEST nvmf_auth_host 00:24:33.766 ************************************ 00:24:33.766 00:24:33.766 real 0m39.506s 00:24:33.766 user 0m34.646s 00:24:33.766 sys 0m4.169s 00:24:33.766 21:23:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:24:33.766 21:23:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.766 21:23:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:33.766 21:23:45 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:24:33.766 21:23:45 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:33.766 21:23:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:33.766 21:23:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:33.766 21:23:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:33.766 ************************************ 00:24:33.766 START TEST nvmf_digest 00:24:33.766 ************************************ 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:33.766 * Looking for test storage... 00:24:33.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:33.766 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:33.767 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:33.767 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.767 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:33.767 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:33.767 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:33.767 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:33.767 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:33.767 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:33.767 Cannot find device "nvmf_tgt_br" 00:24:33.767 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:24:33.767 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:33.767 Cannot find device "nvmf_tgt_br2" 00:24:33.767 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:24:33.767 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:34.025 Cannot find device "nvmf_tgt_br" 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:24:34.025 21:23:45 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:34.025 Cannot find device "nvmf_tgt_br2" 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:34.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:34.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:34.025 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:34.283 21:23:45 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:34.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:24:34.284 00:24:34.284 --- 10.0.0.2 ping statistics --- 00:24:34.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.284 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:34.284 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:34.284 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:24:34.284 00:24:34.284 --- 10.0.0.3 ping statistics --- 00:24:34.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.284 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:34.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:34.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:24:34.284 00:24:34.284 --- 10.0.0.1 ping statistics --- 00:24:34.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.284 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:34.284 ************************************ 00:24:34.284 START TEST nvmf_digest_clean 00:24:34.284 ************************************ 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:34.284 21:23:45 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=86117 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 86117 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 86117 ']' 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.284 21:23:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:34.284 [2024-07-14 21:23:45.756365] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:34.284 [2024-07-14 21:23:45.756862] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.541 [2024-07-14 21:23:45.934747] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.799 [2024-07-14 21:23:46.165362] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.800 [2024-07-14 21:23:46.165459] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.800 [2024-07-14 21:23:46.165492] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.800 [2024-07-14 21:23:46.165521] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.800 [2024-07-14 21:23:46.165536] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
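[Annotation, not part of the captured console output.] The nvmf_veth_init sequence traced above assembles the virtual test network for these digest runs: a nvmf_tgt_ns_spdk namespace for the target, veth pairs for the initiator and target sides, and a nvmf_br bridge joining their host-side peers. A minimal standalone sketch using the same interface names, addresses and firewall rules as the trace (only the error handling differs from the harness):

  #!/usr/bin/env bash
  # Rebuild the virtual test network used by nvmf_veth_init (sketch).
  set -e
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: initiator side stays in the root namespace, target side moves into the netns
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addresses: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP (port 4420) in, and bridged forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity checks, as in the log
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1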
00:24:34.800 [2024-07-14 21:23:46.165585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.365 21:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:35.365 21:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:35.365 21:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:35.365 21:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:35.365 21:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:35.365 21:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.365 21:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:35.365 21:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:35.365 21:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:35.365 21:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.365 21:23:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:35.623 [2024-07-14 21:23:46.951775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:35.623 null0 00:24:35.623 [2024-07-14 21:23:47.062776] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.623 [2024-07-14 21:23:47.086939] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86155 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86155 /var/tmp/bperf.sock 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 86155 ']' 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:35.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.623 21:23:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:35.881 [2024-07-14 21:23:47.204450] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:35.881 [2024-07-14 21:23:47.204913] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86155 ] 00:24:35.881 [2024-07-14 21:23:47.377647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.139 [2024-07-14 21:23:47.610522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.705 21:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.705 21:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:36.705 21:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:36.705 21:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:36.705 21:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:36.964 [2024-07-14 21:23:48.466164] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:37.222 21:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:37.222 21:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:37.481 nvme0n1 00:24:37.481 21:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:37.481 21:23:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:37.481 Running I/O for 2 seconds... 
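[Annotation, not part of the captured console output.] The randread/4096/QD128 pass above is driven entirely over the bdevperf RPC socket: start bdevperf paused, finish framework init, attach the remote controller with data digest enabled, then trigger the timed run. A condensed sketch of that sequence with the same paths and arguments as traced (the socket-wait loop stands in for the harness's waitforlisten helper):

  # Condensed driver for one clean-digest bperf pass (sketch).
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bperf.sock

  # 1. start bdevperf paused (-z --wait-for-rpc) so the transport can be configured first
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # 2. wait for the RPC socket, finish init, attach the controller with data digest (--ddgst)
  while [ ! -S "$SOCK" ]; do sleep 0.1; done
  "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 3. kick off the timed run and wait for the latency summary
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests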
00:24:40.014 00:24:40.014 Latency(us) 00:24:40.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.014 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:40.014 nvme0n1 : 2.01 13274.16 51.85 0.00 0.00 9634.57 8817.57 27167.65 00:24:40.014 =================================================================================================================== 00:24:40.014 Total : 13274.16 51.85 0.00 0.00 9634.57 8817.57 27167.65 00:24:40.014 0 00:24:40.014 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:40.014 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:40.014 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:40.014 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:40.014 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:40.014 | select(.opcode=="crc32c") 00:24:40.014 | "\(.module_name) \(.executed)"' 00:24:40.014 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:40.014 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:40.015 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:40.015 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:40.015 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86155 00:24:40.015 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 86155 ']' 00:24:40.015 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 86155 00:24:40.015 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:40.015 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:40.015 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86155 00:24:40.015 killing process with pid 86155 00:24:40.015 Received shutdown signal, test time was about 2.000000 seconds 00:24:40.015 00:24:40.015 Latency(us) 00:24:40.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.015 =================================================================================================================== 00:24:40.015 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:40.015 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:40.015 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:40.015 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86155' 00:24:40.015 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 86155 00:24:40.015 21:23:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 86155 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86223 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86223 /var/tmp/bperf.sock 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 86223 ']' 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:40.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:40.962 21:23:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:40.962 [2024-07-14 21:23:52.372332] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:40.962 [2024-07-14 21:23:52.372726] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86223 ] 00:24:40.962 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:40.962 Zero copy mechanism will not be used. 
00:24:41.261 [2024-07-14 21:23:52.547655] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.261 [2024-07-14 21:23:52.721192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.827 21:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:41.827 21:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:41.827 21:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:41.827 21:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:41.827 21:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:42.393 [2024-07-14 21:23:53.670654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:42.393 21:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:42.393 21:23:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:42.651 nvme0n1 00:24:42.651 21:23:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:42.651 21:23:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:42.651 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:42.651 Zero copy mechanism will not be used. 00:24:42.651 Running I/O for 2 seconds... 
00:24:45.182 00:24:45.182 Latency(us) 00:24:45.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.182 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:45.182 nvme0n1 : 2.00 6268.62 783.58 0.00 0.00 2548.19 2263.97 4200.26 00:24:45.182 =================================================================================================================== 00:24:45.182 Total : 6268.62 783.58 0.00 0.00 2548.19 2263.97 4200.26 00:24:45.182 0 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:45.182 | select(.opcode=="crc32c") 00:24:45.182 | "\(.module_name) \(.executed)"' 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86223 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 86223 ']' 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 86223 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86223 00:24:45.182 killing process with pid 86223 00:24:45.182 Received shutdown signal, test time was about 2.000000 seconds 00:24:45.182 00:24:45.182 Latency(us) 00:24:45.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.182 =================================================================================================================== 00:24:45.182 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86223' 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 86223 00:24:45.182 21:23:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 86223 00:24:46.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
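[Annotation, not part of the captured console output.] The pass/fail decision after each run above reduces to one accel_get_stats query filtered for crc32c: the clean (non-DSA) variants expect the software module to have executed at least one digest operation. A minimal sketch of the same check, reusing the jq filter from the trace:

  # Verify which accel module actually computed the crc32c digests (sketch).
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bperf.sock

  read -r acc_module acc_executed < <(
    "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )

  # scan_dsa=false in these runs, so the expected module is "software"
  [ "$acc_executed" -gt 0 ] && [ "$acc_module" = software ] && echo "digest crc32c check passed"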
00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86291 00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86291 /var/tmp/bperf.sock 00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 86291 ']' 00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.117 21:23:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.117 [2024-07-14 21:23:57.654856] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:46.117 [2024-07-14 21:23:57.655325] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86291 ] 00:24:46.375 [2024-07-14 21:23:57.823578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.632 [2024-07-14 21:23:57.999501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.197 21:23:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:47.197 21:23:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:47.197 21:23:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:47.197 21:23:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:47.197 21:23:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:47.763 [2024-07-14 21:23:59.006335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:47.763 21:23:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:47.763 21:23:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:48.021 nvme0n1 00:24:48.021 21:23:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:48.021 21:23:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:48.278 Running I/O for 2 seconds... 
00:24:50.180 00:24:50.180 Latency(us) 00:24:50.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.180 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:50.180 nvme0n1 : 2.01 11948.16 46.67 0.00 0.00 10701.37 2785.28 20971.52 00:24:50.180 =================================================================================================================== 00:24:50.180 Total : 11948.16 46.67 0.00 0.00 10701.37 2785.28 20971.52 00:24:50.180 0 00:24:50.180 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:50.180 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:50.180 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:50.180 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:50.180 | select(.opcode=="crc32c") 00:24:50.180 | "\(.module_name) \(.executed)"' 00:24:50.180 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:50.439 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:50.439 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:50.439 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:50.439 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:50.439 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86291 00:24:50.439 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 86291 ']' 00:24:50.439 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 86291 00:24:50.439 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:50.439 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:50.439 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86291 00:24:50.439 killing process with pid 86291 00:24:50.439 Received shutdown signal, test time was about 2.000000 seconds 00:24:50.439 00:24:50.439 Latency(us) 00:24:50.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.439 =================================================================================================================== 00:24:50.439 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:50.439 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:50.439 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:50.439 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86291' 00:24:50.439 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 86291 00:24:50.439 21:24:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 86291 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86358 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86358 /var/tmp/bperf.sock 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 86358 ']' 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:51.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:51.819 21:24:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:51.819 [2024-07-14 21:24:03.117941] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:51.819 [2024-07-14 21:24:03.118956] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86358 ] 00:24:51.819 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:51.819 Zero copy mechanism will not be used. 
00:24:51.819 [2024-07-14 21:24:03.291232] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.076 [2024-07-14 21:24:03.494435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.641 21:24:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:52.641 21:24:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:52.641 21:24:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:52.641 21:24:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:52.641 21:24:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:53.207 [2024-07-14 21:24:04.489138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:53.207 21:24:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:53.207 21:24:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:53.465 nvme0n1 00:24:53.465 21:24:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:53.465 21:24:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:53.723 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:53.723 Zero copy mechanism will not be used. 00:24:53.723 Running I/O for 2 seconds... 
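[Annotation, not part of the captured console output.] Each bperf instance is torn down with the killprocess helper whose trace appears after every run; reduced to the behaviour visible in this log (the sudo guard and error paths are omitted), it is roughly:

  # Sketch of what killprocess <pid> does in the traces above; not the helper's full source.
  killprocess() {
    local pid=$1
    kill -0 "$pid"                                  # confirm the process is still alive
    local name
    name=$(ps --no-headers -o comm= "$pid")         # reactor_0 / reactor_1 in these runs
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid"                                     # reap it; works because bperf was started as a child job
  }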
00:24:55.621 00:24:55.621 Latency(us) 00:24:55.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.621 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:55.621 nvme0n1 : 2.00 4598.90 574.86 0.00 0.00 3468.37 3008.70 10962.39 00:24:55.621 =================================================================================================================== 00:24:55.621 Total : 4598.90 574.86 0.00 0.00 3468.37 3008.70 10962.39 00:24:55.621 0 00:24:55.621 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:55.621 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:55.621 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:55.621 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:55.621 | select(.opcode=="crc32c") 00:24:55.621 | "\(.module_name) \(.executed)"' 00:24:55.621 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:55.879 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:55.879 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:55.879 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:55.879 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:55.879 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86358 00:24:55.879 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 86358 ']' 00:24:55.879 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 86358 00:24:55.879 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:55.879 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:55.879 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86358 00:24:55.879 killing process with pid 86358 00:24:55.879 Received shutdown signal, test time was about 2.000000 seconds 00:24:55.879 00:24:55.879 Latency(us) 00:24:55.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.879 =================================================================================================================== 00:24:55.879 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:55.879 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:55.879 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:55.879 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86358' 00:24:55.879 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 86358 00:24:55.879 21:24:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 86358 00:24:57.270 21:24:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 86117 00:24:57.270 21:24:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 86117 ']' 00:24:57.270 21:24:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 86117 00:24:57.270 21:24:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:57.270 21:24:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:57.271 21:24:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86117 00:24:57.271 killing process with pid 86117 00:24:57.271 21:24:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:57.271 21:24:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:57.271 21:24:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86117' 00:24:57.271 21:24:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 86117 00:24:57.271 21:24:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 86117 00:24:58.659 00:24:58.659 real 0m24.169s 00:24:58.659 user 0m46.134s 00:24:58.659 sys 0m4.565s 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:58.659 ************************************ 00:24:58.659 END TEST nvmf_digest_clean 00:24:58.659 ************************************ 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:58.659 ************************************ 00:24:58.659 START TEST nvmf_digest_error 00:24:58.659 ************************************ 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=86466 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 86466 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 86466 ']' 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.659 21:24:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:58.659 [2024-07-14 21:24:09.957631] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:58.659 [2024-07-14 21:24:09.957809] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.659 [2024-07-14 21:24:10.131383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.918 [2024-07-14 21:24:10.341079] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.918 [2024-07-14 21:24:10.341161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.918 [2024-07-14 21:24:10.341191] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.918 [2024-07-14 21:24:10.341204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.918 [2024-07-14 21:24:10.341214] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.918 [2024-07-14 21:24:10.341248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.486 21:24:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:59.486 21:24:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:59.486 21:24:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:59.486 21:24:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:59.486 21:24:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:59.486 21:24:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.486 21:24:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:59.486 21:24:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.486 21:24:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:59.486 [2024-07-14 21:24:10.930400] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:59.486 21:24:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.486 21:24:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:59.486 21:24:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:59.486 21:24:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.486 21:24:10 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:59.745 [2024-07-14 21:24:11.132547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:59.745 null0 00:24:59.745 [2024-07-14 21:24:11.250753] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.745 [2024-07-14 21:24:11.274940] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.745 21:24:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.745 21:24:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:59.745 21:24:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:59.745 21:24:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:59.745 21:24:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:59.745 21:24:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:59.745 21:24:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86504 00:24:59.745 21:24:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86504 /var/tmp/bperf.sock 00:24:59.745 21:24:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:59.745 21:24:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 86504 ']' 00:24:59.745 21:24:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:59.745 21:24:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:59.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:59.745 21:24:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:59.745 21:24:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:59.745 21:24:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:00.002 [2024-07-14 21:24:11.415009] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
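Two pieces of setup happen above before the digest-error workload starts: the target (launched with --wait-for-rpc) has its crc32c accel operation reassigned to the error-injection module, and a second bdevperf is started with -z against /var/tmp/bperf.sock and waited on. The target notices in between (the null0 bdev, "TCP Transport Init", the listener on 10.0.0.2:4420) come from common_target_config, whose exact RPCs are not echoed in this log, so the bdev_null_create / nvmf_* calls in the sketch below are assumptions based on typical SPDK usage rather than lines taken from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target RPCs go to the default /var/tmp/spdk.sock
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

  # opcode reassignment must land before framework init, hence --wait-for-rpc on the target
  $rpc accel_assign_opc -o crc32c -m error
  $rpc framework_start_init

  # assumed reconstruction of common_target_config (names and sizes illustrative):
  $rpc bdev_null_create null0 1000 512
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator-side bdevperf, idle (-z) until driven over its own RPC socket
  $bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # poll until the socket answers (the suite uses its waitforlisten helper instead)
  until $rpc -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done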
00:25:00.002 [2024-07-14 21:24:11.415248] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86504 ] 00:25:00.259 [2024-07-14 21:24:11.597200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.259 [2024-07-14 21:24:11.808273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.517 [2024-07-14 21:24:11.997105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:01.083 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:01.083 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:01.083 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:01.083 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:01.083 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:01.083 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.083 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:01.083 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.083 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:01.083 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:01.650 nvme0n1 00:25:01.650 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:01.650 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.650 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:01.650 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.650 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:01.650 21:24:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:01.650 Running I/O for 2 seconds... 
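The bperf-side sequence for the error test above mirrors the clean test, with three additions: NVMe error counters and unlimited bdev retries are enabled on the initiator, crc32c corruption is switched off on the target while the controller attaches (so the connect itself is clean), and only then is corruption enabled before the workload runs. Condensed into a sketch, the same RPCs look like this (note which socket each one targets):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  bperf=/var/tmp/bperf.sock      # initiator (bdevperf); target RPCs use the default socket

  # initiator: keep per-controller NVMe error stats and retry failed I/O indefinitely
  $rpc -s $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # target: injection off while attaching, so the connect succeeds
  $rpc accel_error_inject_error -o crc32c -t disable
  $rpc -s $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # target: start corrupting crc32c results (-i 256, as passed by the test), then run I/O;
  # each corrupted digest surfaces on the initiator as the "data digest error" /
  # transient transport error entries that follow in the log
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  $bperf_py -s $bperf perform_tests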
00:25:01.650 [2024-07-14 21:24:13.108131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.650 [2024-07-14 21:24:13.108204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.650 [2024-07-14 21:24:13.108262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.650 [2024-07-14 21:24:13.130640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.650 [2024-07-14 21:24:13.130704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.650 [2024-07-14 21:24:13.130728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.650 [2024-07-14 21:24:13.153435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.650 [2024-07-14 21:24:13.153551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.650 [2024-07-14 21:24:13.153591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.650 [2024-07-14 21:24:13.175949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.650 [2024-07-14 21:24:13.176025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.650 [2024-07-14 21:24:13.176048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.926 [2024-07-14 21:24:13.198830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.926 [2024-07-14 21:24:13.198896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.926 [2024-07-14 21:24:13.198922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.926 [2024-07-14 21:24:13.221871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.926 [2024-07-14 21:24:13.221975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.926 [2024-07-14 21:24:13.221998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.926 [2024-07-14 21:24:13.244834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.926 [2024-07-14 21:24:13.244904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.926 [2024-07-14 21:24:13.244930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.926 [2024-07-14 21:24:13.267749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.926 [2024-07-14 21:24:13.267848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.926 [2024-07-14 21:24:13.267887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.926 [2024-07-14 21:24:13.291324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.926 [2024-07-14 21:24:13.291379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.926 [2024-07-14 21:24:13.291405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.926 [2024-07-14 21:24:13.314149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.926 [2024-07-14 21:24:13.314213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.926 [2024-07-14 21:24:13.314235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.926 [2024-07-14 21:24:13.337184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.926 [2024-07-14 21:24:13.337252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.926 [2024-07-14 21:24:13.337278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.926 [2024-07-14 21:24:13.359599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.926 [2024-07-14 21:24:13.359690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.926 [2024-07-14 21:24:13.359728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.926 [2024-07-14 21:24:13.382443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.926 [2024-07-14 21:24:13.382511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.926 [2024-07-14 21:24:13.382553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.926 [2024-07-14 21:24:13.404814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.926 [2024-07-14 21:24:13.404874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.926 [2024-07-14 
21:24:13.404897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.926 [2024-07-14 21:24:13.427325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.926 [2024-07-14 21:24:13.427376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.926 [2024-07-14 21:24:13.427399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.926 [2024-07-14 21:24:13.449958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.926 [2024-07-14 21:24:13.450061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.926 [2024-07-14 21:24:13.450085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.188 [2024-07-14 21:24:13.472948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.188 [2024-07-14 21:24:13.473067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.188 [2024-07-14 21:24:13.473093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.188 [2024-07-14 21:24:13.496348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.188 [2024-07-14 21:24:13.496411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.188 [2024-07-14 21:24:13.496433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.188 [2024-07-14 21:24:13.519105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.188 [2024-07-14 21:24:13.519175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.188 [2024-07-14 21:24:13.519201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.188 [2024-07-14 21:24:13.542044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.188 [2024-07-14 21:24:13.542150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.188 [2024-07-14 21:24:13.542172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.188 [2024-07-14 21:24:13.564879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.189 [2024-07-14 21:24:13.564944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 
nsid:1 lba:3658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-07-14 21:24:13.564985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.189 [2024-07-14 21:24:13.587322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.189 [2024-07-14 21:24:13.587396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-07-14 21:24:13.587419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.189 [2024-07-14 21:24:13.610232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.189 [2024-07-14 21:24:13.610292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-07-14 21:24:13.610317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.189 [2024-07-14 21:24:13.632543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.189 [2024-07-14 21:24:13.632603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-07-14 21:24:13.632626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.189 [2024-07-14 21:24:13.655593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.189 [2024-07-14 21:24:13.655678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-07-14 21:24:13.655708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.189 [2024-07-14 21:24:13.678495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.189 [2024-07-14 21:24:13.678605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-07-14 21:24:13.678658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.189 [2024-07-14 21:24:13.701464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.189 [2024-07-14 21:24:13.701531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-07-14 21:24:13.701574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.189 [2024-07-14 21:24:13.724418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.189 [2024-07-14 
21:24:13.724485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.189 [2024-07-14 21:24:13.724509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.446 [2024-07-14 21:24:13.747491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.447 [2024-07-14 21:24:13.747590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.447 [2024-07-14 21:24:13.747618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.447 [2024-07-14 21:24:13.770423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.447 [2024-07-14 21:24:13.770511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.447 [2024-07-14 21:24:13.770533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.447 [2024-07-14 21:24:13.793809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.447 [2024-07-14 21:24:13.793936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.447 [2024-07-14 21:24:13.793962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.447 [2024-07-14 21:24:13.816335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.447 [2024-07-14 21:24:13.816450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.447 [2024-07-14 21:24:13.816512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.447 [2024-07-14 21:24:13.838514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.447 [2024-07-14 21:24:13.838606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.447 [2024-07-14 21:24:13.838649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.447 [2024-07-14 21:24:13.861310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.447 [2024-07-14 21:24:13.861369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.447 [2024-07-14 21:24:13.861391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.447 [2024-07-14 21:24:13.883502] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.447 [2024-07-14 21:24:13.883585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.447 [2024-07-14 21:24:13.883611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.447 [2024-07-14 21:24:13.906320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.447 [2024-07-14 21:24:13.906395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.447 [2024-07-14 21:24:13.906417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.447 [2024-07-14 21:24:13.928727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.447 [2024-07-14 21:24:13.928793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.447 [2024-07-14 21:24:13.928822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.447 [2024-07-14 21:24:13.951705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.447 [2024-07-14 21:24:13.951787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.447 [2024-07-14 21:24:13.951810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.447 [2024-07-14 21:24:13.974058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.447 [2024-07-14 21:24:13.974109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.447 [2024-07-14 21:24:13.974150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.705 [2024-07-14 21:24:13.996984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.705 [2024-07-14 21:24:13.997070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.705 [2024-07-14 21:24:13.997093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.705 [2024-07-14 21:24:14.020027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.705 [2024-07-14 21:24:14.020078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.705 [2024-07-14 21:24:14.020102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.705 [2024-07-14 21:24:14.042493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.705 [2024-07-14 21:24:14.042585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.705 [2024-07-14 21:24:14.042624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.705 [2024-07-14 21:24:14.064994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.705 [2024-07-14 21:24:14.065078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.705 [2024-07-14 21:24:14.065103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.705 [2024-07-14 21:24:14.087065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.705 [2024-07-14 21:24:14.087170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.705 [2024-07-14 21:24:14.087192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.705 [2024-07-14 21:24:14.109203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.705 [2024-07-14 21:24:14.109250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.705 [2024-07-14 21:24:14.109276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.705 [2024-07-14 21:24:14.131798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.705 [2024-07-14 21:24:14.131908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.705 [2024-07-14 21:24:14.131931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.705 [2024-07-14 21:24:14.154128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.705 [2024-07-14 21:24:14.154195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.705 [2024-07-14 21:24:14.154221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.705 [2024-07-14 21:24:14.177149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.705 [2024-07-14 21:24:14.177241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.705 [2024-07-14 21:24:14.177263] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.705 [2024-07-14 21:24:14.200696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.705 [2024-07-14 21:24:14.200771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.705 [2024-07-14 21:24:14.200798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.705 [2024-07-14 21:24:14.224240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.705 [2024-07-14 21:24:14.224328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.705 [2024-07-14 21:24:14.224351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.705 [2024-07-14 21:24:14.247425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.705 [2024-07-14 21:24:14.247508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.705 [2024-07-14 21:24:14.247533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.963 [2024-07-14 21:24:14.270241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.963 [2024-07-14 21:24:14.270328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.963 [2024-07-14 21:24:14.270351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.963 [2024-07-14 21:24:14.292997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.963 [2024-07-14 21:24:14.293085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.963 [2024-07-14 21:24:14.293110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.963 [2024-07-14 21:24:14.316427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.963 [2024-07-14 21:24:14.316509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.963 [2024-07-14 21:24:14.316533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.963 [2024-07-14 21:24:14.339368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.963 [2024-07-14 21:24:14.339467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 
lba:13577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.963 [2024-07-14 21:24:14.339492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.963 [2024-07-14 21:24:14.363563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.963 [2024-07-14 21:24:14.363623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.963 [2024-07-14 21:24:14.363646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.963 [2024-07-14 21:24:14.386841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.963 [2024-07-14 21:24:14.386908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.963 [2024-07-14 21:24:14.386934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.963 [2024-07-14 21:24:14.409715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.963 [2024-07-14 21:24:14.409786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.963 [2024-07-14 21:24:14.409825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.964 [2024-07-14 21:24:14.432647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.964 [2024-07-14 21:24:14.432701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.964 [2024-07-14 21:24:14.432727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.964 [2024-07-14 21:24:14.455812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.964 [2024-07-14 21:24:14.455926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.964 [2024-07-14 21:24:14.455949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.964 [2024-07-14 21:24:14.478655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.964 [2024-07-14 21:24:14.478707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.964 [2024-07-14 21:24:14.478751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.964 [2024-07-14 21:24:14.501606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.964 [2024-07-14 
21:24:14.501681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.964 [2024-07-14 21:24:14.501704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.221 [2024-07-14 21:24:14.525325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.221 [2024-07-14 21:24:14.525377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.221 [2024-07-14 21:24:14.525402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.221 [2024-07-14 21:24:14.558329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.221 [2024-07-14 21:24:14.558429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.221 [2024-07-14 21:24:14.558474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.221 [2024-07-14 21:24:14.581602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.221 [2024-07-14 21:24:14.581692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.221 [2024-07-14 21:24:14.581728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.221 [2024-07-14 21:24:14.604401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.221 [2024-07-14 21:24:14.604509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.221 [2024-07-14 21:24:14.604536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.221 [2024-07-14 21:24:14.627114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.221 [2024-07-14 21:24:14.627175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.221 [2024-07-14 21:24:14.627197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.221 [2024-07-14 21:24:14.650145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.221 [2024-07-14 21:24:14.650213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.221 [2024-07-14 21:24:14.650254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.221 [2024-07-14 21:24:14.673276] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.221 [2024-07-14 21:24:14.673365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.221 [2024-07-14 21:24:14.673388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.221 [2024-07-14 21:24:14.695807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.221 [2024-07-14 21:24:14.695875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.221 [2024-07-14 21:24:14.695901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.221 [2024-07-14 21:24:14.718341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.221 [2024-07-14 21:24:14.718445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.222 [2024-07-14 21:24:14.718467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.222 [2024-07-14 21:24:14.741159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.222 [2024-07-14 21:24:14.741211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.222 [2024-07-14 21:24:14.741252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.222 [2024-07-14 21:24:14.763830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.222 [2024-07-14 21:24:14.763920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.222 [2024-07-14 21:24:14.763942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.480 [2024-07-14 21:24:14.786544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.480 [2024-07-14 21:24:14.786610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.480 [2024-07-14 21:24:14.786635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.480 [2024-07-14 21:24:14.809066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.480 [2024-07-14 21:24:14.809123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.480 [2024-07-14 21:24:14.809144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.480 [2024-07-14 21:24:14.831740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.480 [2024-07-14 21:24:14.831817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.480 [2024-07-14 21:24:14.831844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.480 [2024-07-14 21:24:14.854529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.480 [2024-07-14 21:24:14.854588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.480 [2024-07-14 21:24:14.854610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.480 [2024-07-14 21:24:14.877400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.480 [2024-07-14 21:24:14.877455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.480 [2024-07-14 21:24:14.877481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.480 [2024-07-14 21:24:14.900638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.480 [2024-07-14 21:24:14.900699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.480 [2024-07-14 21:24:14.900721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.480 [2024-07-14 21:24:14.923645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.480 [2024-07-14 21:24:14.923695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.480 [2024-07-14 21:24:14.923741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.480 [2024-07-14 21:24:14.946395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.480 [2024-07-14 21:24:14.946505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.480 [2024-07-14 21:24:14.946527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.480 [2024-07-14 21:24:14.968798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.480 [2024-07-14 21:24:14.968863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.480 [2024-07-14 
21:24:14.968904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.480 [2024-07-14 21:24:14.991344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.480 [2024-07-14 21:24:14.991403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.480 [2024-07-14 21:24:14.991424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.480 [2024-07-14 21:24:15.014254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.480 [2024-07-14 21:24:15.014338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.480 [2024-07-14 21:24:15.014363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.738 [2024-07-14 21:24:15.037003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.738 [2024-07-14 21:24:15.037061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.738 [2024-07-14 21:24:15.037084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.738 [2024-07-14 21:24:15.059888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.738 [2024-07-14 21:24:15.059954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.738 [2024-07-14 21:24:15.060011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.738 [2024-07-14 21:24:15.082605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.738 [2024-07-14 21:24:15.082657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.738 [2024-07-14 21:24:15.082695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.738 00:25:03.738 Latency(us) 00:25:03.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.738 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:03.738 nvme0n1 : 2.01 11082.06 43.29 0.00 0.00 11540.35 10783.65 44564.48 00:25:03.738 =================================================================================================================== 00:25:03.738 Total : 11082.06 43.29 0.00 0.00 11540.35 10783.65 44564.48 00:25:03.738 0 00:25:03.738 21:24:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:03.738 21:24:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:03.738 21:24:15 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:03.738 | .driver_specific 00:25:03.738 | .nvme_error 00:25:03.739 | .status_code 00:25:03.739 | .command_transient_transport_error' 00:25:03.739 21:24:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:03.996 21:24:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 87 > 0 )) 00:25:03.996 21:24:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86504 00:25:03.996 21:24:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 86504 ']' 00:25:03.996 21:24:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 86504 00:25:03.996 21:24:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:03.996 21:24:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:03.996 21:24:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86504 00:25:03.996 killing process with pid 86504 00:25:03.996 Received shutdown signal, test time was about 2.000000 seconds 00:25:03.996 00:25:03.996 Latency(us) 00:25:03.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.996 =================================================================================================================== 00:25:03.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:03.996 21:24:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:03.996 21:24:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:03.996 21:24:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86504' 00:25:03.996 21:24:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 86504 00:25:03.996 21:24:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 86504 00:25:04.931 21:24:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:04.931 21:24:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:04.931 21:24:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:04.931 21:24:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:04.931 21:24:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:04.931 21:24:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:04.931 21:24:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86572 00:25:04.931 21:24:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86572 /var/tmp/bperf.sock 00:25:04.931 21:24:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 86572 ']' 00:25:04.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
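The '(( 87 > 0 ))' check traced just above is how digest.sh judges the leg that just finished: with --nvme-error-stat enabled, the host-side bdev keeps per-status-code NVMe error counters, and the test reads back the COMMAND TRANSIENT TRANSPORT ERROR count over the bdevperf RPC socket. A minimal sketch of that check, using only the RPC and jq filter visible in the trace (socket path and bdev name are the ones from this run; the errs variable is illustrative):

  # Ask the bdevperf app for per-bdev I/O stats, including the NVMe error counters.
  errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # Pass only if the injected digest corruption produced at least one transient transport error
  # (this run saw 87 of them).
  (( errs > 0 ))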
00:25:04.931 21:24:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:04.931 21:24:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:04.931 21:24:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:04.931 21:24:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:04.931 21:24:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:05.189 [2024-07-14 21:24:16.582080] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:05.189 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:05.189 Zero copy mechanism will not be used. 00:25:05.189 [2024-07-14 21:24:16.582305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86572 ] 00:25:05.448 [2024-07-14 21:24:16.753576] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.448 [2024-07-14 21:24:16.952112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.706 [2024-07-14 21:24:17.143073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:05.965 21:24:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:05.965 21:24:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:05.965 21:24:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:05.965 21:24:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:06.531 21:24:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:06.531 21:24:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.532 21:24:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.532 21:24:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.532 21:24:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.532 21:24:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.791 nvme0n1 00:25:06.791 21:24:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:06.791 21:24:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.791 21:24:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.791 21:24:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.791 21:24:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:06.791 21:24:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:06.791 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:06.791 Zero copy mechanism will not be used. 00:25:06.791 Running I/O for 2 seconds... 00:25:06.791 [2024-07-14 21:24:18.255457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.791 [2024-07-14 21:24:18.256053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.791 [2024-07-14 21:24:18.256209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.791 [2024-07-14 21:24:18.262388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.791 [2024-07-14 21:24:18.262563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.791 [2024-07-14 21:24:18.262670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.791 [2024-07-14 21:24:18.268719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.791 [2024-07-14 21:24:18.268882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.791 [2024-07-14 21:24:18.268991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.791 [2024-07-14 21:24:18.275016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.791 [2024-07-14 21:24:18.275173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.791 [2024-07-14 21:24:18.275308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.791 [2024-07-14 21:24:18.281215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.791 [2024-07-14 21:24:18.281371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.791 [2024-07-14 21:24:18.281473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.791 [2024-07-14 21:24:18.287275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.791 [2024-07-14 21:24:18.287450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.791 [2024-07-14 21:24:18.287568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.791 [2024-07-14 21:24:18.293364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.791 [2024-07-14 21:24:18.293524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.791 [2024-07-14 21:24:18.293690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.791 [2024-07-14 21:24:18.299535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.791 [2024-07-14 21:24:18.299707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.791 [2024-07-14 21:24:18.299853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.791 [2024-07-14 21:24:18.305853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.791 [2024-07-14 21:24:18.306039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.791 [2024-07-14 21:24:18.306158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.791 [2024-07-14 21:24:18.312331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.791 [2024-07-14 21:24:18.312554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.791 [2024-07-14 21:24:18.312588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.791 [2024-07-14 21:24:18.318480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.791 [2024-07-14 21:24:18.318539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.791 [2024-07-14 21:24:18.318562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.791 [2024-07-14 21:24:18.324359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.791 [2024-07-14 21:24:18.324434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.791 [2024-07-14 21:24:18.324457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.791 [2024-07-14 21:24:18.330271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.791 [2024-07-14 21:24:18.330321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
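The repeating pairs of 'data digest error on tqpair' and 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' completions above are the intended outcome of this leg rather than a malfunction: before perform_tests started, the trace shows error statistics being enabled, crc32c corruption being injected into the accel layer, and the controller being attached with data digests turned on, so corrupted digest computations surface as transient transport errors on the host side. Condensed from the xtrace (bperf_rpc and bperf_py are harness wrappers that talk to /var/tmp/bperf.sock, rpc_cmd wraps rpc.py; all values are the ones used in this run), the setup is roughly:

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count NVMe errors per status code (read back later via bdev_get_iostat)
  rpc_cmd accel_error_inject_error -o crc32c -t disable                     # start with injection disabled
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                        # data digest (ddgst) enabled on the host connection
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32               # now corrupt crc32c results so digests mismatch
  bperf_py perform_tests                                                    # run the queued randread job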
00:25:06.791 [2024-07-14 21:24:18.330346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.791 [2024-07-14 21:24:18.336380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.791 [2024-07-14 21:24:18.336427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.791 [2024-07-14 21:24:18.336466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.051 [2024-07-14 21:24:18.342282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.051 [2024-07-14 21:24:18.342340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.051 [2024-07-14 21:24:18.342362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.051 [2024-07-14 21:24:18.348410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.051 [2024-07-14 21:24:18.348487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.051 [2024-07-14 21:24:18.348529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.051 [2024-07-14 21:24:18.354242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.051 [2024-07-14 21:24:18.354306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.051 [2024-07-14 21:24:18.354331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.051 [2024-07-14 21:24:18.360167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.051 [2024-07-14 21:24:18.360229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.051 [2024-07-14 21:24:18.360271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.051 [2024-07-14 21:24:18.366124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.051 [2024-07-14 21:24:18.366247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.051 [2024-07-14 21:24:18.366271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.051 [2024-07-14 21:24:18.372061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.051 [2024-07-14 21:24:18.372117] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.051 [2024-07-14 21:24:18.372140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.051 [2024-07-14 21:24:18.377884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.051 [2024-07-14 21:24:18.377966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.051 [2024-07-14 21:24:18.377990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.051 [2024-07-14 21:24:18.383667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.051 [2024-07-14 21:24:18.383713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.383736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.389809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.389871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.389896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.395908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.395987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.396012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.401895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.401949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.401971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.407551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.407655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.407677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.413427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.413476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.413504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.419645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.419725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.419765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.425948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.425996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.426021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.431915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.431998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.432020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.437782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.437864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.437903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.443552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.443600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.443622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.449339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.449430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.449470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.454986] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.455057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.455079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.460975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.461034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.461057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.466727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.466814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.466836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.472359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.472439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.472463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.478057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.478123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.478148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.483852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.483927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.483949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.489476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.489548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.489586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.495335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.495415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.495439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.501436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.501532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.501556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.507200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.507266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.507291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.513028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.513084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.513122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.518753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.518827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.518865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.524647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.524697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.524722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.530501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.530550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.530589] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.536332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.536396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.536419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.542255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.542313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.542336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.548148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.052 [2024-07-14 21:24:18.548201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.052 [2024-07-14 21:24:18.548222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.052 [2024-07-14 21:24:18.553927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.053 [2024-07-14 21:24:18.553972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.053 [2024-07-14 21:24:18.554012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.053 [2024-07-14 21:24:18.559685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.053 [2024-07-14 21:24:18.559763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.053 [2024-07-14 21:24:18.559799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.053 [2024-07-14 21:24:18.565417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.053 [2024-07-14 21:24:18.565474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.053 [2024-07-14 21:24:18.565496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.053 [2024-07-14 21:24:18.570893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.053 [2024-07-14 21:24:18.570951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.053 [2024-07-14 21:24:18.570972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.053 [2024-07-14 21:24:18.576612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.053 [2024-07-14 21:24:18.576668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.053 [2024-07-14 21:24:18.576690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.053 [2024-07-14 21:24:18.582437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.053 [2024-07-14 21:24:18.582534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.053 [2024-07-14 21:24:18.582560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.053 [2024-07-14 21:24:18.588175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.053 [2024-07-14 21:24:18.588251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.053 [2024-07-14 21:24:18.588274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.053 [2024-07-14 21:24:18.594042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.053 [2024-07-14 21:24:18.594130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.053 [2024-07-14 21:24:18.594152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.311 [2024-07-14 21:24:18.600089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.311 [2024-07-14 21:24:18.600144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.311 [2024-07-14 21:24:18.600166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.311 [2024-07-14 21:24:18.605714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.311 [2024-07-14 21:24:18.605793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.311 [2024-07-14 21:24:18.605821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.311 [2024-07-14 21:24:18.611347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.311 [2024-07-14 
21:24:18.611394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.311 [2024-07-14 21:24:18.611417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.311 [2024-07-14 21:24:18.617274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.311 [2024-07-14 21:24:18.617320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.311 [2024-07-14 21:24:18.617343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.311 [2024-07-14 21:24:18.622954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.311 [2024-07-14 21:24:18.623027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.311 [2024-07-14 21:24:18.623050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.311 [2024-07-14 21:24:18.628721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.311 [2024-07-14 21:24:18.628792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.311 [2024-07-14 21:24:18.628815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.311 [2024-07-14 21:24:18.634276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.634342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.634366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.639840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.639919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.639943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.645783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.645883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.645907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.651609] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.651682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.651704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.657301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.657357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.657378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.663017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.663067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.663093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.668966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.669030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.669054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.674737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.674828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.674852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.680565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.680623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.680646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.686401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.686449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.686473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.692316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.692365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.692389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.697946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.698012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.698042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.703704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.703796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.703821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.709490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.709550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.709588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.715157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.715221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.715245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.721030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.721093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.721131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.726934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.727009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.727032] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.732864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.732951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.732973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.738715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.738799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.738822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.744633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.744682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.744708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.750363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.750427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.750454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.756090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.756208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.756230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.761955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.762072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.762107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.767713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.767805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.767845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.773716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.773778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.773836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.779675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.779739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.779791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.785434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.785511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.785534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.791181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.791243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.791267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.312 [2024-07-14 21:24:18.797133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.312 [2024-07-14 21:24:18.797182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.312 [2024-07-14 21:24:18.797206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.313 [2024-07-14 21:24:18.803073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.313 [2024-07-14 21:24:18.803138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.313 [2024-07-14 21:24:18.803166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.313 [2024-07-14 21:24:18.809020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.313 [2024-07-14 21:24:18.809068] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.313 [2024-07-14 21:24:18.809091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.313 [2024-07-14 21:24:18.814812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.313 [2024-07-14 21:24:18.814914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.313 [2024-07-14 21:24:18.814937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.313 [2024-07-14 21:24:18.820868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.313 [2024-07-14 21:24:18.820938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.313 [2024-07-14 21:24:18.820960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.313 [2024-07-14 21:24:18.826533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.313 [2024-07-14 21:24:18.826582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.313 [2024-07-14 21:24:18.826606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.313 [2024-07-14 21:24:18.832423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.313 [2024-07-14 21:24:18.832511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.313 [2024-07-14 21:24:18.832537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.313 [2024-07-14 21:24:18.838393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.313 [2024-07-14 21:24:18.838458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.313 [2024-07-14 21:24:18.838483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.313 [2024-07-14 21:24:18.844037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.313 [2024-07-14 21:24:18.844110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.313 [2024-07-14 21:24:18.844132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.313 [2024-07-14 21:24:18.849793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x61500002b280) 00:25:07.313 [2024-07-14 21:24:18.849879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.313 [2024-07-14 21:24:18.849902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.313 [2024-07-14 21:24:18.855814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.313 [2024-07-14 21:24:18.855904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.313 [2024-07-14 21:24:18.855932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.862090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.862137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.862163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.867818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.867906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.867932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.873793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.873860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.873883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.879494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.879552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.879573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.885459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.885506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.885529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.572 
[2024-07-14 21:24:18.891436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.891502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.891541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.897624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.897697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.897750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.903434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.903510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.903533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.909625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.909702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.909725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.915603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.915669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.915694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.921668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.921717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.921742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.927630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.927721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.927758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.933800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.933901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.933939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.939725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.939806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.939828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.945856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.945914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.945969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.951746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.951822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.951847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.957569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.957640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.957664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.963185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.963243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.963264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.969032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.969121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 
[2024-07-14 21:24:18.969142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.974767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.974825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.974845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.980592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.980642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.980663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.986582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.986632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.986652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.992287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.992335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.992355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:18.998465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:18.998529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:18.998549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:19.004437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:19.004512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:19.004534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:19.010586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:19.010682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:19.010703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:19.016686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:19.016737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:19.016776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.572 [2024-07-14 21:24:19.022732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.572 [2024-07-14 21:24:19.022839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.572 [2024-07-14 21:24:19.022861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.028329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 [2024-07-14 21:24:19.028408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.028428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.034227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 [2024-07-14 21:24:19.034277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.034329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.039934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 [2024-07-14 21:24:19.039997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.040019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.045579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 [2024-07-14 21:24:19.045661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.045682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.051441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 
[2024-07-14 21:24:19.051507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.051528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.057105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 [2024-07-14 21:24:19.057182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.057216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.062911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 [2024-07-14 21:24:19.062975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.063000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.068694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 [2024-07-14 21:24:19.068746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.068786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.074334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 [2024-07-14 21:24:19.074399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.074421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.080125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 [2024-07-14 21:24:19.080203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.080224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.086081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 [2024-07-14 21:24:19.086144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.086165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.091883] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 [2024-07-14 21:24:19.091975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.091996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.097707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 [2024-07-14 21:24:19.097767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.097805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.103524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 [2024-07-14 21:24:19.103589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.103611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.109318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 [2024-07-14 21:24:19.109366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.109403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.573 [2024-07-14 21:24:19.115010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.573 [2024-07-14 21:24:19.115056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.573 [2024-07-14 21:24:19.115076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.832 [2024-07-14 21:24:19.121128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.832 [2024-07-14 21:24:19.121176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.832 [2024-07-14 21:24:19.121197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.832 [2024-07-14 21:24:19.127179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.832 [2024-07-14 21:24:19.127259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.832 [2024-07-14 21:24:19.127294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.832 [2024-07-14 21:24:19.133137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.832 [2024-07-14 21:24:19.133187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.832 [2024-07-14 21:24:19.133208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.832 [2024-07-14 21:24:19.138882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.832 [2024-07-14 21:24:19.138945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.832 [2024-07-14 21:24:19.138965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.832 [2024-07-14 21:24:19.144670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.832 [2024-07-14 21:24:19.144720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.832 [2024-07-14 21:24:19.144741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.150556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.150635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.150654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.156530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.156579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.156605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.162262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.162324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.162343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.168014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.168064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.168085] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.174135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.174215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.174252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.179791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.179852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.179881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.185771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.185860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.185880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.191568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.191634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.191655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.197296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.197360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.197380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.202969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.203032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.203052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.208872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.208932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.208954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.214648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.214697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.214718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.220334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.220382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.220404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.226137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.226218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.226252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.232183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.232260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.232294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.238142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.238207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.238242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.244084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.244132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.244153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.250045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.250122] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.250141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.255994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.256058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.256080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.262193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.262256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.262275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.268102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.268167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.268187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.274162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.274210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.274229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.279900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.279995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.280017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.285994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.286040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.286075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.292115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.292209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.292230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.297848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.297924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.297945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.303523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.303588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.303610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.309609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.309672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.833 [2024-07-14 21:24:19.309709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.833 [2024-07-14 21:24:19.315417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.833 [2024-07-14 21:24:19.315482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.834 [2024-07-14 21:24:19.315504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.834 [2024-07-14 21:24:19.321471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.834 [2024-07-14 21:24:19.321520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.834 [2024-07-14 21:24:19.321540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.834 [2024-07-14 21:24:19.327077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.834 [2024-07-14 21:24:19.327124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.834 [2024-07-14 21:24:19.327144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
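The repeating triples in this stretch of output are the pattern this test is meant to produce: nvme_tcp.c reports a data digest mismatch on the receive path for qpair 0x61500002b280, the affected READ (sqid:1 cid:15, len:32) is printed, and the request is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. the host is allowed to retry. As an illustrative aside (not SPDK source), the check being exercised amounts to recomputing a CRC32C over the received payload and comparing it with the digest that arrived with the data; the function name in the log, nvme_tcp_accel_seq_recv_compute_crc32_done, points at that same CRC32 step. The standalone sketch below assumes exactly that model and uses a plain bitwise CRC32C (reflected polynomial 0x82F63B78); buffer sizes and the corruption point are made up for illustration.

/* crc32c_digest_check.c - illustrative sketch only, not SPDK code.
 * Models a data-digest check: compute CRC32C over a payload, corrupt one
 * byte "in flight", recompute, and report the mismatch. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise reflected CRC-32C (Castagnoli), init/final XOR 0xFFFFFFFF. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[512];
    memset(payload, 0xA5, sizeof(payload));

    /* Digest the sender would have attached to the data. */
    uint32_t sent_digest = crc32c(payload, sizeof(payload));

    /* Flip one byte to model corruption of the payload in transit. */
    payload[100] ^= 0x01;

    uint32_t recomputed = crc32c(payload, sizeof(payload));
    if (recomputed != sent_digest) {
        /* This corresponds to the "data digest error" / TRANSIENT TRANSPORT
         * ERROR pairs in the log above: the data cannot be trusted, so the
         * command is failed in a retryable way rather than surfaced as good. */
        printf("data digest mismatch: sent=0x%08x recomputed=0x%08x\n",
               sent_digest, recomputed);
        return 1;
    }
    printf("data digest ok: 0x%08x\n", recomputed);
    return 0;
}

The bitwise loop is only to keep the sketch dependency-free; production code would normally use a table-driven or hardware-accelerated CRC32C instead.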
00:25:07.834 [2024-07-14 21:24:19.333043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.834 [2024-07-14 21:24:19.333109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.834 [2024-07-14 21:24:19.333130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.834 [2024-07-14 21:24:19.338841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.834 [2024-07-14 21:24:19.338922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.834 [2024-07-14 21:24:19.338943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.834 [2024-07-14 21:24:19.344465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.834 [2024-07-14 21:24:19.344525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.834 [2024-07-14 21:24:19.344547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.834 [2024-07-14 21:24:19.350104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.834 [2024-07-14 21:24:19.350168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.834 [2024-07-14 21:24:19.350217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.834 [2024-07-14 21:24:19.355648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.834 [2024-07-14 21:24:19.355696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.834 [2024-07-14 21:24:19.355749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.834 [2024-07-14 21:24:19.361402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.834 [2024-07-14 21:24:19.361481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.834 [2024-07-14 21:24:19.361501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.834 [2024-07-14 21:24:19.367319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.834 [2024-07-14 21:24:19.367369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.834 [2024-07-14 21:24:19.367390] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.834 [2024-07-14 21:24:19.373159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.834 [2024-07-14 21:24:19.373223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.834 [2024-07-14 21:24:19.373244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.834 [2024-07-14 21:24:19.378938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.834 [2024-07-14 21:24:19.378987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.834 [2024-07-14 21:24:19.379008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.094 [2024-07-14 21:24:19.384599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:08.094 [2024-07-14 21:24:19.384648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.094 [2024-07-14 21:24:19.384669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.094 [2024-07-14 21:24:19.390420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:08.094 [2024-07-14 21:24:19.390500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.094 [2024-07-14 21:24:19.390521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.094 [2024-07-14 21:24:19.396605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:08.094 [2024-07-14 21:24:19.396655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.094 [2024-07-14 21:24:19.396677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.094 [2024-07-14 21:24:19.402712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:08.094 [2024-07-14 21:24:19.402800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.094 [2024-07-14 21:24:19.402822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.094 [2024-07-14 21:24:19.408784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:08.094 [2024-07-14 21:24:19.408833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:08.094 [2024-07-14 21:24:19.408853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.094 [2024-07-14 21:24:19.414656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:08.094 [2024-07-14 21:24:19.414706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.094 [2024-07-14 21:24:19.414727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.094 [2024-07-14 21:24:19.420317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:08.094 [2024-07-14 21:24:19.420368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.094 [2024-07-14 21:24:19.420389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.094 [2024-07-14 21:24:19.425980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:08.094 [2024-07-14 21:24:19.426037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.094 [2024-07-14 21:24:19.426061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.094 [2024-07-14 21:24:19.431948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:08.094 [2024-07-14 21:24:19.432002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.094 [2024-07-14 21:24:19.432024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.094 [2024-07-14 21:24:19.437687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:08.094 [2024-07-14 21:24:19.437744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.094 [2024-07-14 21:24:19.437790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.094 [2024-07-14 21:24:19.443965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:08.094 [2024-07-14 21:24:19.444020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.094 [2024-07-14 21:24:19.444049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.094 [2024-07-14 21:24:19.450354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:08.094 [2024-07-14 21:24:19.450467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.094 [2024-07-14 21:24:19.450504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:08.094 [2024-07-14 21:24:19.456851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:25:08.094 [2024-07-14 21:24:19.456950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.094 [2024-07-14 21:24:19.456972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-entry pattern -- data digest error on tqpair=(0x61500002b280), READ sqid:1 cid:15 nsid:1, COMMAND TRANSIENT TRANSPORT ERROR (00/22) -- repeats for further LBAs from 21:24:19.462986 through 21:24:20.239699 ...]
00:25:08.874 [2024-07-14 21:24:20.245523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:25:08.874 [2024-07-14 21:24:20.245593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.874 [2024-07-14 21:24:20.245615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.874 [2024-07-14 21:24:20.245615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.874 00:25:08.874 Latency(us) 00:25:08.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.874 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:08.874 nvme0n1 : 2.00 5275.04 659.38 0.00 0.00 3028.48 2502.28 6881.28 00:25:08.874 =================================================================================================================== 00:25:08.874 Total : 5275.04 659.38 0.00 0.00 3028.48 2502.28 6881.28 00:25:08.874 0 00:25:08.874 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:08.874 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:08.874 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:08.874 | .driver_specific 00:25:08.875 | .nvme_error 00:25:08.875 | .status_code 00:25:08.875 | .command_transient_transport_error' 00:25:08.875 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:09.132 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 340 > 0 )) 00:25:09.132 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86572 00:25:09.132 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 86572 ']' 00:25:09.132 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 86572 00:25:09.132 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:09.132 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:09.132 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86572 00:25:09.132 killing process with pid 86572 00:25:09.132 Received shutdown signal, test time was about 2.000000 seconds 00:25:09.132 00:25:09.132 Latency(us) 00:25:09.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.132 =================================================================================================================== 00:25:09.132 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:09.132 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:09.132 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:09.132 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86572' 00:25:09.132 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 86572 00:25:09.132 21:24:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 86572 00:25:10.509 21:24:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:10.509 21:24:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:10.509 21:24:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 
00:25:10.509 21:24:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:10.509 21:24:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:10.509 21:24:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86639 00:25:10.509 21:24:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86639 /var/tmp/bperf.sock 00:25:10.509 21:24:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:10.509 21:24:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 86639 ']' 00:25:10.509 21:24:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:10.509 21:24:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.509 21:24:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:10.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:10.509 21:24:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.509 21:24:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:10.509 [2024-07-14 21:24:21.841414] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:10.509 [2024-07-14 21:24:21.841870] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86639 ] 00:25:10.509 [2024-07-14 21:24:22.018165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.768 [2024-07-14 21:24:22.218145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.026 [2024-07-14 21:24:22.409737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:11.285 21:24:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:11.285 21:24:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:11.285 21:24:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:11.285 21:24:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:11.566 21:24:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:11.566 21:24:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.566 21:24:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:11.566 21:24:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.566 21:24:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
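For reference, the xtrace lines around this point amount to the bperf setup sequence below. This is a condensed sketch rather than the verbatim host/digest.sh: the bperf_rpc helper is assumed to wrap rpc.py -s /var/tmp/bperf.sock as shown in the digest.sh@18 traces, and rpc_cmd is assumed to target the nvmf target application's default RPC socket.

  # launch bdevperf with the workload parameters traced above (randwrite, 4 KiB, queue depth 128, 2 s)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z &
  # enable per-command NVMe error counters and unlimited bdev retries on the bperf app
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # start with crc32c error injection disabled on the target (default RPC socket assumed)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # attach the controller with TCP data digest (--ddgst) enabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt the next 256 crc32c operations on the target, then drive I/O for the test window
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests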
00:25:11.566 21:24:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:11.828 nvme0n1 00:25:12.087 21:24:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:12.087 21:24:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.087 21:24:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.087 21:24:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.087 21:24:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:12.087 21:24:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:12.087 Running I/O for 2 seconds... 00:25:12.087 [2024-07-14 21:24:23.550526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fef90 00:25:12.087 [2024-07-14 21:24:23.553908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.087 [2024-07-14 21:24:23.553987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.087 [2024-07-14 21:24:23.572054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195feb58 00:25:12.087 [2024-07-14 21:24:23.575377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.087 [2024-07-14 21:24:23.575455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:12.087 [2024-07-14 21:24:23.593777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:25:12.087 [2024-07-14 21:24:23.597136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.087 [2024-07-14 21:24:23.597206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:12.087 [2024-07-14 21:24:23.615415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:25:12.087 [2024-07-14 21:24:23.618584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.087 [2024-07-14 21:24:23.618652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:12.347 [2024-07-14 21:24:23.637455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd208 00:25:12.347 [2024-07-14 21:24:23.640714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.347 [2024-07-14 21:24:23.640779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:12.347 [2024-07-14 21:24:23.658613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:25:12.347 [2024-07-14 21:24:23.661936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.347 [2024-07-14 21:24:23.661986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:12.347 [2024-07-14 21:24:23.679909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc128 00:25:12.347 [2024-07-14 21:24:23.683202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.347 [2024-07-14 21:24:23.683250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:12.347 [2024-07-14 21:24:23.701847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:25:12.347 [2024-07-14 21:24:23.705190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.347 [2024-07-14 21:24:23.705256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:12.347 [2024-07-14 21:24:23.723246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb048 00:25:12.347 [2024-07-14 21:24:23.726522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.347 [2024-07-14 21:24:23.726572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:12.347 [2024-07-14 21:24:23.744627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa7d8 00:25:12.347 [2024-07-14 21:24:23.747834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.347 [2024-07-14 21:24:23.747882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:12.347 [2024-07-14 21:24:23.766169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:25:12.347 [2024-07-14 21:24:23.769495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.347 [2024-07-14 21:24:23.769545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:12.347 [2024-07-14 21:24:23.787506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f96f8 00:25:12.347 [2024-07-14 21:24:23.790713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:12.347 [2024-07-14 21:24:23.790772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:12.347 [2024-07-14 21:24:23.809376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:25:12.347 [2024-07-14 21:24:23.812462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.347 [2024-07-14 21:24:23.812539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:12.347 [2024-07-14 21:24:23.831259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8618 00:25:12.347 [2024-07-14 21:24:23.834454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.347 [2024-07-14 21:24:23.834532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.347 [2024-07-14 21:24:23.853654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:25:12.347 [2024-07-14 21:24:23.856603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.347 [2024-07-14 21:24:23.856656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:12.347 [2024-07-14 21:24:23.875038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7538 00:25:12.347 [2024-07-14 21:24:23.878137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.347 [2024-07-14 21:24:23.878188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:12.607 [2024-07-14 21:24:23.896345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:25:12.607 [2024-07-14 21:24:23.899388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.607 [2024-07-14 21:24:23.899439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:12.607 [2024-07-14 21:24:23.918309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:25:12.607 [2024-07-14 21:24:23.921259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.607 [2024-07-14 21:24:23.921324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:12.607 [2024-07-14 21:24:23.939939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:25:12.607 [2024-07-14 21:24:23.942855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 
nsid:1 lba:19652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.607 [2024-07-14 21:24:23.942923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:12.607 [2024-07-14 21:24:23.961747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5378 00:25:12.607 [2024-07-14 21:24:23.964576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.607 [2024-07-14 21:24:23.964629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:12.607 [2024-07-14 21:24:23.983405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:25:12.607 [2024-07-14 21:24:23.986370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.607 [2024-07-14 21:24:23.986437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:12.607 [2024-07-14 21:24:24.004924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:25:12.607 [2024-07-14 21:24:24.007721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.607 [2024-07-14 21:24:24.007784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:12.607 [2024-07-14 21:24:24.026664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3a28 00:25:12.607 [2024-07-14 21:24:24.029468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.607 [2024-07-14 21:24:24.029518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:12.607 [2024-07-14 21:24:24.048633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f31b8 00:25:12.607 [2024-07-14 21:24:24.051436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.607 [2024-07-14 21:24:24.051518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:12.607 [2024-07-14 21:24:24.070805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2948 00:25:12.607 [2024-07-14 21:24:24.073599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.607 [2024-07-14 21:24:24.073666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:12.607 [2024-07-14 21:24:24.092060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f20d8 00:25:12.607 [2024-07-14 21:24:24.094702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.607 [2024-07-14 21:24:24.094781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:12.607 [2024-07-14 21:24:24.113328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:25:12.607 [2024-07-14 21:24:24.116006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.607 [2024-07-14 21:24:24.116073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:12.607 [2024-07-14 21:24:24.134406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:25:12.607 [2024-07-14 21:24:24.137129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.607 [2024-07-14 21:24:24.137193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:12.866 [2024-07-14 21:24:24.155679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:25:12.866 [2024-07-14 21:24:24.158273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.866 [2024-07-14 21:24:24.158324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:12.866 [2024-07-14 21:24:24.176624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:25:12.866 [2024-07-14 21:24:24.179195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.866 [2024-07-14 21:24:24.179262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:12.866 [2024-07-14 21:24:24.197673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:25:12.866 [2024-07-14 21:24:24.200288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.866 [2024-07-14 21:24:24.200363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:12.866 [2024-07-14 21:24:24.219278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:25:12.866 [2024-07-14 21:24:24.221788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.866 [2024-07-14 21:24:24.221849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:12.866 [2024-07-14 21:24:24.240539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 
00:25:12.866 [2024-07-14 21:24:24.243080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.866 [2024-07-14 21:24:24.243147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:12.866 [2024-07-14 21:24:24.261648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195edd58 00:25:12.866 [2024-07-14 21:24:24.264070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.866 [2024-07-14 21:24:24.264135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:12.866 [2024-07-14 21:24:24.282468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:25:12.866 [2024-07-14 21:24:24.285186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.866 [2024-07-14 21:24:24.285235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:12.866 [2024-07-14 21:24:24.303938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:25:12.866 [2024-07-14 21:24:24.306440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.866 [2024-07-14 21:24:24.306507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:12.866 [2024-07-14 21:24:24.325290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:25:12.866 [2024-07-14 21:24:24.327733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.866 [2024-07-14 21:24:24.327793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:12.866 [2024-07-14 21:24:24.346319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:25:12.866 [2024-07-14 21:24:24.348779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.866 [2024-07-14 21:24:24.348841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:12.866 [2024-07-14 21:24:24.367819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:25:12.866 [2024-07-14 21:24:24.370129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.866 [2024-07-14 21:24:24.370179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:12.866 [2024-07-14 21:24:24.388777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:25:12.867 [2024-07-14 21:24:24.391173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.867 [2024-07-14 21:24:24.391240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:12.867 [2024-07-14 21:24:24.410054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:25:12.867 [2024-07-14 21:24:24.412430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.867 [2024-07-14 21:24:24.412485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:13.125 [2024-07-14 21:24:24.431768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:25:13.125 [2024-07-14 21:24:24.434165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.125 [2024-07-14 21:24:24.434276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:13.125 [2024-07-14 21:24:24.453089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:25:13.125 [2024-07-14 21:24:24.455438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.125 [2024-07-14 21:24:24.455488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:13.125 [2024-07-14 21:24:24.474595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:25:13.125 [2024-07-14 21:24:24.476860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.125 [2024-07-14 21:24:24.476925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:13.125 [2024-07-14 21:24:24.495919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:25:13.125 [2024-07-14 21:24:24.498197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.125 [2024-07-14 21:24:24.498248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:13.126 [2024-07-14 21:24:24.517485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:25:13.126 [2024-07-14 21:24:24.519589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.126 [2024-07-14 21:24:24.519651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:13.126 [2024-07-14 21:24:24.538141] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:25:13.126 [2024-07-14 21:24:24.540440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.126 [2024-07-14 21:24:24.540512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:13.126 [2024-07-14 21:24:24.560515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:25:13.126 [2024-07-14 21:24:24.562745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.126 [2024-07-14 21:24:24.562803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:13.126 [2024-07-14 21:24:24.581398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:25:13.126 [2024-07-14 21:24:24.583682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.126 [2024-07-14 21:24:24.583741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.126 [2024-07-14 21:24:24.602580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5658 00:25:13.126 [2024-07-14 21:24:24.604681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.126 [2024-07-14 21:24:24.604733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:13.126 [2024-07-14 21:24:24.623961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:25:13.126 [2024-07-14 21:24:24.626031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.126 [2024-07-14 21:24:24.626084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:13.126 [2024-07-14 21:24:24.645603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:25:13.126 [2024-07-14 21:24:24.647665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.126 [2024-07-14 21:24:24.647735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:13.126 [2024-07-14 21:24:24.667554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:25:13.126 [2024-07-14 21:24:24.669612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.126 [2024-07-14 21:24:24.669662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 
sqhd:0019 p:0 m:0 dnr:0 00:25:13.384 [2024-07-14 21:24:24.689350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3498 00:25:13.384 [2024-07-14 21:24:24.691445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.384 [2024-07-14 21:24:24.691508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.384 [2024-07-14 21:24:24.710821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:25:13.384 [2024-07-14 21:24:24.712814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.384 [2024-07-14 21:24:24.712878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:13.384 [2024-07-14 21:24:24.731937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:25:13.384 [2024-07-14 21:24:24.733936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.384 [2024-07-14 21:24:24.733984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:13.384 [2024-07-14 21:24:24.753030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:25:13.384 [2024-07-14 21:24:24.755072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.384 [2024-07-14 21:24:24.755161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:13.384 [2024-07-14 21:24:24.774403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:25:13.384 [2024-07-14 21:24:24.776434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.384 [2024-07-14 21:24:24.776505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:13.384 [2024-07-14 21:24:24.795573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:25:13.385 [2024-07-14 21:24:24.797488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.385 [2024-07-14 21:24:24.797586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:13.385 [2024-07-14 21:24:24.817223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:25:13.385 [2024-07-14 21:24:24.819019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.385 [2024-07-14 21:24:24.819069] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:13.385 [2024-07-14 21:24:24.838380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:25:13.385 [2024-07-14 21:24:24.840291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.385 [2024-07-14 21:24:24.840339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:13.385 [2024-07-14 21:24:24.859597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:25:13.385 [2024-07-14 21:24:24.861452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.385 [2024-07-14 21:24:24.861500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:13.385 [2024-07-14 21:24:24.880830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:25:13.385 [2024-07-14 21:24:24.882588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.385 [2024-07-14 21:24:24.882640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:13.385 [2024-07-14 21:24:24.902059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:25:13.385 [2024-07-14 21:24:24.903857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.385 [2024-07-14 21:24:24.903907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:13.385 [2024-07-14 21:24:24.932586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:25:13.643 [2024-07-14 21:24:24.935945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.643 [2024-07-14 21:24:24.936004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.643 [2024-07-14 21:24:24.953704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:25:13.643 [2024-07-14 21:24:24.956997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.643 [2024-07-14 21:24:24.957057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:13.644 [2024-07-14 21:24:24.974767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:25:13.644 [2024-07-14 21:24:24.977997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.644 
[2024-07-14 21:24:24.978053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:13.644 [2024-07-14 21:24:24.995790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:25:13.644 [2024-07-14 21:24:24.999173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.644 [2024-07-14 21:24:24.999262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:13.644 [2024-07-14 21:24:25.017247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:25:13.644 [2024-07-14 21:24:25.020365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.644 [2024-07-14 21:24:25.020438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.644 [2024-07-14 21:24:25.038615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:25:13.644 [2024-07-14 21:24:25.041708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.644 [2024-07-14 21:24:25.041797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:13.644 [2024-07-14 21:24:25.059828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:25:13.644 [2024-07-14 21:24:25.063170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.644 [2024-07-14 21:24:25.063242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:13.644 [2024-07-14 21:24:25.081429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:25:13.644 [2024-07-14 21:24:25.084666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.644 [2024-07-14 21:24:25.084730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:13.644 [2024-07-14 21:24:25.103157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:25:13.644 [2024-07-14 21:24:25.106256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.644 [2024-07-14 21:24:25.106330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:13.644 [2024-07-14 21:24:25.125021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:25:13.644 [2024-07-14 21:24:25.128058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:12294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.644 [2024-07-14 21:24:25.128131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:13.644 [2024-07-14 21:24:25.146409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3498 00:25:13.644 [2024-07-14 21:24:25.149595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.644 [2024-07-14 21:24:25.149700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:13.644 [2024-07-14 21:24:25.167979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:25:13.644 [2024-07-14 21:24:25.171120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.644 [2024-07-14 21:24:25.171197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:13.644 [2024-07-14 21:24:25.189478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:25:13.903 [2024-07-14 21:24:25.192581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.903 [2024-07-14 21:24:25.192641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:13.903 [2024-07-14 21:24:25.210926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:25:13.903 [2024-07-14 21:24:25.214024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.903 [2024-07-14 21:24:25.214085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:13.903 [2024-07-14 21:24:25.232649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5658 00:25:13.903 [2024-07-14 21:24:25.235661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.903 [2024-07-14 21:24:25.235718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:13.903 [2024-07-14 21:24:25.254151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:25:13.903 [2024-07-14 21:24:25.257103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.903 [2024-07-14 21:24:25.257166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.903 [2024-07-14 21:24:25.275480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:25:13.903 [2024-07-14 21:24:25.278520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.903 [2024-07-14 21:24:25.278577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:13.903 [2024-07-14 21:24:25.297448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:25:13.903 [2024-07-14 21:24:25.300462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.903 [2024-07-14 21:24:25.300550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:13.903 [2024-07-14 21:24:25.319346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:25:13.903 [2024-07-14 21:24:25.322284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.903 [2024-07-14 21:24:25.322390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:13.903 [2024-07-14 21:24:25.340741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:25:13.903 [2024-07-14 21:24:25.343666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.903 [2024-07-14 21:24:25.343729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:13.903 [2024-07-14 21:24:25.362247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:25:13.903 [2024-07-14 21:24:25.365025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.903 [2024-07-14 21:24:25.365113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:13.903 [2024-07-14 21:24:25.383530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:25:13.903 [2024-07-14 21:24:25.386384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.903 [2024-07-14 21:24:25.386457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:13.903 [2024-07-14 21:24:25.404868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:25:13.903 [2024-07-14 21:24:25.407644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.903 [2024-07-14 21:24:25.407719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:13.903 [2024-07-14 21:24:25.426067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 
00:25:13.903 [2024-07-14 21:24:25.428765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.903 [2024-07-14 21:24:25.428833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:13.903 [2024-07-14 21:24:25.446863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:25:13.903 [2024-07-14 21:24:25.449692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.903 [2024-07-14 21:24:25.449768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:14.162 [2024-07-14 21:24:25.468268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:25:14.162 [2024-07-14 21:24:25.470852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.162 [2024-07-14 21:24:25.470957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:14.162 [2024-07-14 21:24:25.489316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:25:14.162 [2024-07-14 21:24:25.492100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.162 [2024-07-14 21:24:25.492156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:14.162 [2024-07-14 21:24:25.510079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:25:14.162 [2024-07-14 21:24:25.512632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.162 [2024-07-14 21:24:25.512694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:14.162 00:25:14.162 Latency(us) 00:25:14.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.162 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:14.162 nvme0n1 : 2.00 11809.21 46.13 0.00 0.00 10827.82 3142.75 42181.35 00:25:14.162 =================================================================================================================== 00:25:14.162 Total : 11809.21 46.13 0.00 0.00 10827.82 3142.75 42181.35 00:25:14.162 0 00:25:14.162 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:14.162 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:14.162 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:14.162 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:14.162 | .driver_specific 00:25:14.162 | .nvme_error 00:25:14.162 | .status_code 
00:25:14.162 | .command_transient_transport_error' 00:25:14.421 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 92 > 0 )) 00:25:14.421 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86639 00:25:14.421 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 86639 ']' 00:25:14.421 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 86639 00:25:14.421 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:14.421 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:14.421 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86639 00:25:14.421 killing process with pid 86639 00:25:14.421 Received shutdown signal, test time was about 2.000000 seconds 00:25:14.421 00:25:14.421 Latency(us) 00:25:14.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.421 =================================================================================================================== 00:25:14.421 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:14.421 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:14.421 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:14.421 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86639' 00:25:14.421 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 86639 00:25:14.421 21:24:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 86639 00:25:15.797 21:24:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:15.797 21:24:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:15.797 21:24:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:15.797 21:24:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:15.797 21:24:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:15.797 21:24:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86706 00:25:15.797 21:24:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86706 /var/tmp/bperf.sock 00:25:15.797 21:24:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:15.797 21:24:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 86706 ']' 00:25:15.797 21:24:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:15.797 21:24:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:15.797 21:24:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:15.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
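The trace above is where the test turns the injected digest failures into its pass/fail signal: it queries bdevperf's RPC socket for per-bdev I/O statistics, extracts the transient transport error counter with jq, and asserts that it is non-zero (92 in this run). Below is a minimal standalone sketch of that query; the rpc.py path, socket, bdev name, and jq filter are copied from the log, while the wrapper function and variable names are my own.

#!/usr/bin/env bash
# Sketch only: reproduces the get_transient_errcount query seen in the digest.sh trace above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk    # repo path as it appears in the log
BPERF_SOCK=/var/tmp/bperf.sock           # bdevperf RPC socket as it appears in the log

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat returns JSON; with --nvme-error-stat enabled (see the
    # bdev_nvme_set_options call later in the trace), the per-status-code NVMe error
    # counters live under driver_specific.nvme_error.status_code.
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

count=$(get_transient_errcount nvme0n1)
# The test's assertion, mirroring the (( 92 > 0 )) check in the trace.
(( count > 0 )) && echo "digest errors were reported as transient transport errors: $count"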
00:25:15.797 21:24:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:15.797 21:24:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:15.797 [2024-07-14 21:24:27.040983] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:15.797 [2024-07-14 21:24:27.041382] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86706 ] 00:25:15.797 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:15.797 Zero copy mechanism will not be used. 00:25:15.797 [2024-07-14 21:24:27.210039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.056 [2024-07-14 21:24:27.410945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.313 [2024-07-14 21:24:27.606202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:16.571 21:24:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.571 21:24:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:16.571 21:24:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:16.571 21:24:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:16.828 21:24:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:16.828 21:24:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.828 21:24:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:16.828 21:24:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.828 21:24:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:16.828 21:24:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:17.085 nvme0n1 00:25:17.085 21:24:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:17.086 21:24:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.086 21:24:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:17.086 21:24:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.086 21:24:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:17.086 21:24:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:17.345 I/O size of 131072 is greater than zero copy threshold (65536). 
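For the 131072-byte, queue-depth-16 pass, the trace above shows the whole setup sequence: enable NVMe error statistics with unlimited bdev retries, attach the TCP controller with data digest (--ddgst) enabled, arm the crc32c corruption injection, and then start the queued job (bdevperf was launched with -z, so it waits for the perform_tests RPC). The sketch below strings those same commands together as a plain script; every flag and path is taken from the log, but the grouping and comments are mine, and I am assuming that the accel_error_inject_error call, which the trace issues through rpc_cmd rather than bperf_rpc, addresses the target application's default RPC socket.

#!/usr/bin/env bash
# Sketch only: the setup traced above, written out as standalone commands.
SPDK=/home/vagrant/spdk_repo/spdk
BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

# Count NVMe error completions and retry I/O indefinitely, so digest failures surface
# as statistics instead of failed writes.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the NVMe/TCP controller with data digest enabled.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm crc32c corruption injection (flags exactly as in the trace; assumed to go to the
# target app's default RPC socket, since the trace uses rpc_cmd rather than bperf_rpc here).
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# bdevperf was started with -z, so the queued randwrite job only runs once this RPC fires.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests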
00:25:17.345 Zero copy mechanism will not be used. 00:25:17.345 Running I/O for 2 seconds... 00:25:17.345 [2024-07-14 21:24:28.743787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.744235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.744295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.751298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.751729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.751787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.758535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.758953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.758999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.765663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.766100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.766143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.772872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.773311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.773361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.780021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.780432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.780473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.787487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.787933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 
[2024-07-14 21:24:28.787996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.795035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.795394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.795497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.802095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.802469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.802511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.809808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.810232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.810283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.817175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.817543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.817593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.824517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.824909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.824957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.831877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.832306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.832371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.839265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.839692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.839735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.846670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.847108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.847162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.854054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.854491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.854543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.861198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.861594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.861635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.345 [2024-07-14 21:24:28.868227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.345 [2024-07-14 21:24:28.868613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.345 [2024-07-14 21:24:28.868663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.346 [2024-07-14 21:24:28.875350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.346 [2024-07-14 21:24:28.875805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.346 [2024-07-14 21:24:28.875869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.346 [2024-07-14 21:24:28.882648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.346 [2024-07-14 21:24:28.883141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.346 [2024-07-14 21:24:28.883190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.346 [2024-07-14 21:24:28.890096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.346 [2024-07-14 21:24:28.890457] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.346 [2024-07-14 21:24:28.890506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.605 [2024-07-14 21:24:28.897430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.605 [2024-07-14 21:24:28.897834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.605 [2024-07-14 21:24:28.897922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.605 [2024-07-14 21:24:28.904582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:28.904968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:28.905010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:28.911692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:28.912172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:28.912241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:28.919098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:28.919446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:28.919502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:28.926291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:28.926677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:28.926717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:28.933446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:28.933820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:28.933896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:28.940602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:28.940998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:28.941040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:28.947962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:28.948387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:28.948437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:28.955206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:28.955585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:28.955650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:28.962329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:28.962726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:28.962784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:28.969518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:28.969937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:28.969987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:28.976911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:28.977294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:28.977343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:28.984099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:28.984521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:28.984562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 
21:24:28.991347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:28.991718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:28.991799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:28.998385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:28.998809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:28.998861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:29.005668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:29.006101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:29.006173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:29.012924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:29.013292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:29.013342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:29.020058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:29.020490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:29.020530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:29.027205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:29.027591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:29.027632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:29.034631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:29.035043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:29.035100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:29.042433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:29.042875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:29.042929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:29.049675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:29.050109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:29.050161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:29.056842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:29.057235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:29.057285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:29.064444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:29.064883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:29.064938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:29.071724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:29.072110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:29.072161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:29.078897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:29.079297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:29.079338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:29.086102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:29.086493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:29.086549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:29.093140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:29.093498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:29.093558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:29.100394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:29.100804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:29.100845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.606 [2024-07-14 21:24:29.107375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.606 [2024-07-14 21:24:29.107792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.606 [2024-07-14 21:24:29.107842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.607 [2024-07-14 21:24:29.114729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.607 [2024-07-14 21:24:29.115147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.607 [2024-07-14 21:24:29.115206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.607 [2024-07-14 21:24:29.122130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.607 [2024-07-14 21:24:29.122532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.607 [2024-07-14 21:24:29.122571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.607 [2024-07-14 21:24:29.129341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.607 [2024-07-14 21:24:29.129688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.607 [2024-07-14 21:24:29.129749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.607 [2024-07-14 21:24:29.136378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.607 [2024-07-14 21:24:29.136796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:17.607 [2024-07-14 21:24:29.136874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.607 [2024-07-14 21:24:29.143681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.607 [2024-07-14 21:24:29.144172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.607 [2024-07-14 21:24:29.144245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.607 [2024-07-14 21:24:29.151260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.607 [2024-07-14 21:24:29.151687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.607 [2024-07-14 21:24:29.151737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.158443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.158858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.158921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.166013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.166471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.166530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.173329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.173731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.173789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.180853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.181340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.181381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.188322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.188746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.188802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.195841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.196232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.196283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.203399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.203813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.203868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.210753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.211179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.211236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.217987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.218357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.218422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.225492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.225954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.226023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.232950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.233378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.233444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.240029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.240543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.240584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.247201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.247609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.247648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.254425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.254817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.254920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.261604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.262054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.262114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.268727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.269173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.269223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.275886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.276347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.276395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.282948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.283355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.283393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.290137] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.290528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.290576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.296989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.297407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.297455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.304158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.304600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.304642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.311427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.311805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.311868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.318950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.319321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.319372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.326320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.867 [2024-07-14 21:24:29.326697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.867 [2024-07-14 21:24:29.326780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.867 [2024-07-14 21:24:29.333455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.868 [2024-07-14 21:24:29.333842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.868 [2024-07-14 21:24:29.333907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.868 [2024-07-14 21:24:29.340584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.868 [2024-07-14 21:24:29.341018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.868 [2024-07-14 21:24:29.341094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.868 [2024-07-14 21:24:29.347862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.868 [2024-07-14 21:24:29.348328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.868 [2024-07-14 21:24:29.348405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.868 [2024-07-14 21:24:29.355188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.868 [2024-07-14 21:24:29.355621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.868 [2024-07-14 21:24:29.355671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.868 [2024-07-14 21:24:29.362559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.868 [2024-07-14 21:24:29.363052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.868 [2024-07-14 21:24:29.363109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.868 [2024-07-14 21:24:29.369637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.868 [2024-07-14 21:24:29.370090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.868 [2024-07-14 21:24:29.370152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.868 [2024-07-14 21:24:29.376841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.868 [2024-07-14 21:24:29.377232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.868 [2024-07-14 21:24:29.377285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.868 [2024-07-14 21:24:29.383842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.868 [2024-07-14 21:24:29.384281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.868 [2024-07-14 21:24:29.384320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.868 [2024-07-14 21:24:29.391103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.868 [2024-07-14 21:24:29.391531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.868 [2024-07-14 21:24:29.391592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.868 [2024-07-14 21:24:29.398368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.868 [2024-07-14 21:24:29.398766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.868 [2024-07-14 21:24:29.398844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.868 [2024-07-14 21:24:29.405607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.868 [2024-07-14 21:24:29.406059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.868 [2024-07-14 21:24:29.406104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.868 [2024-07-14 21:24:29.413278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:17.868 [2024-07-14 21:24:29.413645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.868 [2024-07-14 21:24:29.413694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.128 [2024-07-14 21:24:29.420801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.128 [2024-07-14 21:24:29.421206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.128 [2024-07-14 21:24:29.421246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.128 [2024-07-14 21:24:29.427965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.128 [2024-07-14 21:24:29.428390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.128 [2024-07-14 21:24:29.428446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.128 [2024-07-14 21:24:29.435142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.128 [2024-07-14 21:24:29.435602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:18.128 [2024-07-14 21:24:29.435683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.128 [2024-07-14 21:24:29.442510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.128 [2024-07-14 21:24:29.442955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.128 [2024-07-14 21:24:29.443003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.128 [2024-07-14 21:24:29.449749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.450189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.450230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.456944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.457300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.457349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.464015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.464420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.464500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.471195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.471550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.471613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.478356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.478758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.478816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.485459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.485910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.485952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.492725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.493153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.493211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.499818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.500254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.500319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.507040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.507437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.507477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.514134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.514522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.514586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.521240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.521664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.521721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.528644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.529033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.529076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.535820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.536250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.536315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.542991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.543418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.543458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.550420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.550829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.550884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.557558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.557980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.558039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.564616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.565002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.565093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.571688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.572123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.572173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.579146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.579610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.579657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.586452] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.586870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.586910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.593627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.594060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.594145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.600910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.601313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.601362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.608032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.608422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.608501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.615044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.615409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.615457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.622409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.622810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.622864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.629535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.629930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.629971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.636686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.637073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.637124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.644038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.644498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.644538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.129 [2024-07-14 21:24:29.651511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.129 [2024-07-14 21:24:29.651955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.129 [2024-07-14 21:24:29.652002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.130 [2024-07-14 21:24:29.658732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.130 [2024-07-14 21:24:29.659129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.130 [2024-07-14 21:24:29.659186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.130 [2024-07-14 21:24:29.665920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.130 [2024-07-14 21:24:29.666325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.130 [2024-07-14 21:24:29.666390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.130 [2024-07-14 21:24:29.672912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.130 [2024-07-14 21:24:29.673322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.130 [2024-07-14 21:24:29.673373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.389 [2024-07-14 21:24:29.680111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.389 [2024-07-14 21:24:29.680556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.389 [2024-07-14 21:24:29.680605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.389 [2024-07-14 21:24:29.687115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.389 [2024-07-14 21:24:29.687530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.389 [2024-07-14 21:24:29.687569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.389 [2024-07-14 21:24:29.694061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.389 [2024-07-14 21:24:29.694440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.389 [2024-07-14 21:24:29.694489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.389 [2024-07-14 21:24:29.701043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.389 [2024-07-14 21:24:29.701462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.389 [2024-07-14 21:24:29.701531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.389 [2024-07-14 21:24:29.707921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.389 [2024-07-14 21:24:29.708324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.389 [2024-07-14 21:24:29.708379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.714992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.715337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.715389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.722044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.722498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.722546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.729340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.729761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.729844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.736509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.736907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.736956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.743427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.743884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.743924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.750889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.751363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.751420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.758205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.758602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.758652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.765437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.765880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.765940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.772653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.773036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.773085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.779890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.780282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.780350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.786933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.787326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.787399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.794150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.794558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.794637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.801258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.801631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.801684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.808228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.808632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.808673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.815235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.815638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.815717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.822357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.822769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.822832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.829468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.829895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.829948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.836617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.837058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.837109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.844141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.844560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.844601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.851119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.851535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.851576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.858531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.858956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.859006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.866183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.866654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.866708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.873667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.874048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.874112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.880918] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.881299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.881347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.888518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.888910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.888960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.895895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.896271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.896344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.903008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.903399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.903456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.390 [2024-07-14 21:24:29.910347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.390 [2024-07-14 21:24:29.910724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.390 [2024-07-14 21:24:29.910794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.391 [2024-07-14 21:24:29.917677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.391 [2024-07-14 21:24:29.918098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.391 [2024-07-14 21:24:29.918146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.391 [2024-07-14 21:24:29.925114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.391 [2024-07-14 21:24:29.925519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.391 [2024-07-14 21:24:29.925581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.391 [2024-07-14 21:24:29.932154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.391 [2024-07-14 21:24:29.932562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.391 [2024-07-14 21:24:29.932604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:29.939625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:29.940103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:29.940149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:29.946919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:29.947324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:29.947381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:29.954084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:29.954497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:29.954539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:29.961305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:29.961712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:29.961767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:29.968507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:29.968893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:29.968963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:29.975631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:29.976035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:29.976102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:29.983039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:29.983408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:29.983464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:29.990158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:29.990594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:29.990635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:29.997229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:29.997681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:29.997722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:30.004343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:30.004719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:30.004771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:30.011690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:30.012088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:30.012133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:30.019004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:30.019477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:30.019541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:30.026622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:30.027075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:30.027120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:30.033668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:30.034076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:30.034124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:30.041083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:30.041498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:30.041553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:30.049014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:30.049441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:30.049479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:30.056595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:30.057067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:30.057115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:30.064010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:30.064407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:30.064456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.651 [2024-07-14 21:24:30.071520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.651 [2024-07-14 21:24:30.071913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.651 [2024-07-14 21:24:30.071957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.078595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.079026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.079115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.085694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.086120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.086167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.092943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.093374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.093412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.100155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.100565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.100606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.107580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.107991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.108032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.114664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.115109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.115155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.122039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.122421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.122462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.129072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.129475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.129517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.136159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.136565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.136607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.143528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.143952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.143993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.150936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.151380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.151438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.158167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.158589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.158629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.165359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.165767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.165835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.173069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.173503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.173542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.180182] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.180621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.180662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.187246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.187621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.187662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.652 [2024-07-14 21:24:30.194211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.652 [2024-07-14 21:24:30.194608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.652 [2024-07-14 21:24:30.194649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.912 [2024-07-14 21:24:30.201563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.912 [2024-07-14 21:24:30.201987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.912 [2024-07-14 21:24:30.202027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.912 [2024-07-14 21:24:30.208559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.912 [2024-07-14 21:24:30.208941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.912 [2024-07-14 21:24:30.208987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.912 [2024-07-14 21:24:30.215684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.912 [2024-07-14 21:24:30.216144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.912 [2024-07-14 21:24:30.216189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.912 [2024-07-14 21:24:30.223024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.912 [2024-07-14 21:24:30.223388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.912 [2024-07-14 21:24:30.223429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.912 [2024-07-14 21:24:30.230461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.912 [2024-07-14 21:24:30.230863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.912 [2024-07-14 21:24:30.230918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.912 [2024-07-14 21:24:30.237705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.912 [2024-07-14 21:24:30.238124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.912 [2024-07-14 21:24:30.238171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.912 [2024-07-14 21:24:30.245062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.912 [2024-07-14 21:24:30.245453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.912 [2024-07-14 21:24:30.245493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.912 [2024-07-14 21:24:30.252513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.912 [2024-07-14 21:24:30.252956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.912 [2024-07-14 21:24:30.252999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.912 [2024-07-14 21:24:30.259751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.912 [2024-07-14 21:24:30.260141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.260182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.266886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.267275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.267325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.274092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.274494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.274535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.281251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.281674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.281731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.288343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.288743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.288802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.295485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.295920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.295959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.302609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.303036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.303086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.309904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.310306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.310347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.317040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.317422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.317463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.324340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.324713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.324768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.331516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.331939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.331980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.338719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.339171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.339219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.346042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.346459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.346499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.353435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.353807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.353860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.360701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.361185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.361234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.367779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.368178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.368271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.375194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.375557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.375613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.382647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.383109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.383154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.389751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.390172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.390215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.396959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.397372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.397412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.404056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.404470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.404536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.411100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.411494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.411534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.418078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.418525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.418564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.425461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.425866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.425935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.432782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.433210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.433252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.440028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.440428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.440469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.447361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.447769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.447823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.913 [2024-07-14 21:24:30.454666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:18.913 [2024-07-14 21:24:30.455139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.913 [2024-07-14 21:24:30.455180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.173 [2024-07-14 21:24:30.462119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.173 [2024-07-14 21:24:30.462540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.173 [2024-07-14 21:24:30.462582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.173 [2024-07-14 21:24:30.469535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.173 [2024-07-14 21:24:30.469983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.173 [2024-07-14 21:24:30.470026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.173 [2024-07-14 21:24:30.477210] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.173 [2024-07-14 21:24:30.477593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.173 [2024-07-14 21:24:30.477635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.173 [2024-07-14 21:24:30.484396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.173 [2024-07-14 21:24:30.484799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.173 [2024-07-14 21:24:30.484858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.173 [2024-07-14 21:24:30.491506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.173 [2024-07-14 21:24:30.491887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.173 [2024-07-14 21:24:30.491929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.173 [2024-07-14 21:24:30.498753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.173 [2024-07-14 21:24:30.499165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.173 [2024-07-14 21:24:30.499207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.173 [2024-07-14 21:24:30.505986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.173 [2024-07-14 21:24:30.506370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.173 [2024-07-14 21:24:30.506410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.173 [2024-07-14 21:24:30.513332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.173 [2024-07-14 21:24:30.513751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.513812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.520791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.521238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.521279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.528444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.528851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.528892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.535880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.536331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.536372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.543128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.543517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.543558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.550073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.550514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.550555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.557308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.557744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.557803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.564441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.564847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.564904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.571612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.572074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.572123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.578847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.579372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.579413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.586148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.586535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.586590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.593330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.593757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.593838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.600464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.600886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.600943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.608114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.608549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.608590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.615140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.615583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.615624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.622301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.622739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.622789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.629402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.629837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.629904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.636713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.637171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.637230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.644193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.644667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.644723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.651067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.651425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.651466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.657924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.658302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.658344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.664823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.665196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.665253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.672422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.672836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.672877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.679942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.680384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.680431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.687861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.688313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.688399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.695240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.695655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.695694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.702451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.702853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.702892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.709492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.709988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.710029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.174 [2024-07-14 21:24:30.716999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.174 [2024-07-14 21:24:30.717439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.174 [2024-07-14 21:24:30.717496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.433 [2024-07-14 21:24:30.724344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:25:19.433 [2024-07-14 21:24:30.724737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.433 [2024-07-14 21:24:30.724792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.433 [2024-07-14 21:24:30.731490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:19.433 [2024-07-14 21:24:30.731873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.433 [2024-07-14 21:24:30.731925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.433 00:25:19.433 Latency(us) 00:25:19.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.433 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:19.433 nvme0n1 : 2.00 4268.91 533.61 0.00 0.00 3737.60 3112.96 7923.90 00:25:19.433 =================================================================================================================== 00:25:19.433 Total : 4268.91 533.61 0.00 0.00 3737.60 3112.96 7923.90 00:25:19.433 0 00:25:19.433 21:24:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:19.433 21:24:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:19.433 21:24:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:19.433 21:24:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:19.433 | .driver_specific 00:25:19.433 | .nvme_error 00:25:19.433 | .status_code 00:25:19.433 | .command_transient_transport_error' 00:25:19.692 21:24:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 275 > 0 )) 00:25:19.692 21:24:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86706 00:25:19.692 21:24:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 86706 ']' 00:25:19.692 21:24:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 86706 00:25:19.692 21:24:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:19.692 21:24:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:19.692 21:24:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86706 00:25:19.692 21:24:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:19.692 21:24:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:19.692 21:24:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86706' 00:25:19.692 killing process with pid 86706 00:25:19.692 21:24:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 86706 00:25:19.692 Received shutdown signal, test time was about 2.000000 seconds 00:25:19.692 00:25:19.692 Latency(us) 00:25:19.692 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.692 =================================================================================================================== 00:25:19.692 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.692 21:24:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 86706 00:25:21.069 21:24:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 86466 00:25:21.069 21:24:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 86466 ']' 00:25:21.069 21:24:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 86466 00:25:21.069 21:24:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:21.069 21:24:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:21.069 21:24:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86466 00:25:21.069 killing process with pid 86466 00:25:21.069 21:24:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:21.069 21:24:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:21.069 21:24:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86466' 00:25:21.069 21:24:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 86466 00:25:21.069 21:24:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 86466 00:25:22.005 00:25:22.005 real 0m23.675s 00:25:22.005 user 0m45.156s 00:25:22.005 sys 0m4.745s 00:25:22.005 21:24:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:22.005 21:24:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:22.005 ************************************ 00:25:22.005 END TEST nvmf_digest_error 00:25:22.005 ************************************ 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:22.263 rmmod nvme_tcp 00:25:22.263 rmmod nvme_fabrics 00:25:22.263 rmmod nvme_keyring 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 86466 ']' 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 86466 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 86466 ']' 00:25:22.263 21:24:33 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 86466 00:25:22.263 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (86466) - No such process 00:25:22.263 Process with pid 86466 is not found 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 86466 is not found' 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:22.263 ************************************ 00:25:22.263 END TEST nvmf_digest 00:25:22.263 ************************************ 00:25:22.263 00:25:22.263 real 0m48.535s 00:25:22.263 user 1m31.452s 00:25:22.263 sys 0m9.634s 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:22.263 21:24:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:22.263 21:24:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:22.263 21:24:33 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:25:22.263 21:24:33 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:25:22.263 21:24:33 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:22.263 21:24:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:22.263 21:24:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:22.263 21:24:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:22.263 ************************************ 00:25:22.263 START TEST nvmf_host_multipath 00:25:22.263 ************************************ 00:25:22.263 21:24:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:22.519 * Looking for test storage... 
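The (( 275 > 0 )) check traced above is the heart of the digest-error verification: bperf_rpc is host/digest.sh's wrapper around rpc.py pointed at bdevperf's socket, and get_transient_errcount is the jq extraction of the NVMe transient-transport-error counter, which must be non-zero after the corrupted writes. A minimal stand-alone sketch of that check, assuming only the rpc.py path, socket, and bdev name shown in this log:

# Ask bdevperf (via its RPC socket) for per-bdev NVMe error statistics and
# require that the injected data-digest corruption surfaced as transient
# transport errors (275 of them in the run captured above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock   # bdevperf RPC socket used by the digest test

errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

(( errcount > 0 ))   # the test fails if no transient transport errors were counted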
00:25:22.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:22.519 21:24:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:22.519 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:25:22.519 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.519 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.519 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.519 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.519 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.519 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:25:22.520 21:24:33 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:22.520 Cannot find device "nvmf_tgt_br" 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:22.520 Cannot find device "nvmf_tgt_br2" 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:25:22.520 Cannot find device "nvmf_tgt_br" 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:22.520 Cannot find device "nvmf_tgt_br2" 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:22.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:22.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:22.520 21:24:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:22.520 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:22.520 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:22.520 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:22.520 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:22.520 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:22.520 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:22.520 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:22.520 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:22.520 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:22.778 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
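At this point the nvmf_veth_init trace (nvmf/common.sh@141 onwards) has created the target's network namespace, the three veth pairs, and the bridge; the entries that follow enslave the remaining peers, open the firewall for port 4420, and ping all three addresses. Condensed into one stand-alone sketch, with interface names and addresses exactly as in this log (cleanup of any leftover devices omitted), the topology setup is roughly:

# Target runs in its own namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator and two for the target's listeners.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = the two target addresses.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the root-namespace ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP (port 4420) in, allow hairpin forwarding across the bridge,
# then sanity-check reachability in both directions.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1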
00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:22.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:25:22.779 00:25:22.779 --- 10.0.0.2 ping statistics --- 00:25:22.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.779 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:22.779 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:22.779 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:25:22.779 00:25:22.779 --- 10.0.0.3 ping statistics --- 00:25:22.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.779 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:22.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:22.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:25:22.779 00:25:22.779 --- 10.0.0.1 ping statistics --- 00:25:22.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.779 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=86999 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 86999 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 86999 ']' 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.779 21:24:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:22.779 [2024-07-14 21:24:34.322769] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:22.779 [2024-07-14 21:24:34.323440] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.037 [2024-07-14 21:24:34.494248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:23.294 [2024-07-14 21:24:34.696395] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.295 [2024-07-14 21:24:34.696509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.295 [2024-07-14 21:24:34.696534] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.295 [2024-07-14 21:24:34.696548] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.295 [2024-07-14 21:24:34.696559] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:23.295 [2024-07-14 21:24:34.696952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.295 [2024-07-14 21:24:34.696964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.553 [2024-07-14 21:24:34.900266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:23.811 21:24:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.811 21:24:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:25:23.811 21:24:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:23.811 21:24:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.811 21:24:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:23.811 21:24:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.811 21:24:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=86999 00:25:23.811 21:24:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:24.069 [2024-07-14 21:24:35.542810] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.069 21:24:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:24.636 Malloc0 00:25:24.636 21:24:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:24.636 21:24:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:24.894 21:24:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.152 [2024-07-14 21:24:36.633123] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.152 21:24:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:25.410 [2024-07-14 21:24:36.869263] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:25.410 21:24:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:25.410 21:24:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=87055 00:25:25.410 21:24:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:25.410 21:24:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 87055 /var/tmp/bdevperf.sock 00:25:25.410 21:24:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 87055 ']' 00:25:25.410 21:24:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:25.410 21:24:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:25.410 21:24:36 
nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:25.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:25.410 21:24:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:25.410 21:24:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:26.367 21:24:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:26.367 21:24:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:25:26.367 21:24:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:26.624 21:24:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:27.189 Nvme0n1 00:25:27.189 21:24:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:27.447 Nvme0n1 00:25:27.447 21:24:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:27.447 21:24:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:25:28.379 21:24:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:25:28.379 21:24:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:28.636 21:24:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:28.895 21:24:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:25:28.895 21:24:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86999 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:28.895 21:24:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87095 00:25:28.895 21:24:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:35.452 21:24:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:35.452 21:24:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:35.452 21:24:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:35.452 21:24:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:35.452 Attaching 4 probes... 
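Before this first measurement window, the bdevperf instance was wired to both listeners of cnode1: the second bdev_nvme_attach_controller call carries -x multipath, so port 4421 is added as an extra path of the existing Nvme0 controller rather than as a new controller, and bdev_nvme_set_options -r -1 (the bdev retry count, presumably set to unlimited so queued I/O rides out path changes) is applied first. A sketch of that pairing, with the RPC forms copied from the log above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  # same subsystem, second portal; -x multipath folds it into Nvme0 as another path
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

The @path lines that follow are what nvmf_path.bt printed for this 6-second window: per-path counters keyed by target address and port.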
00:25:35.452 @path[10.0.0.2, 4421]: 12976 00:25:35.452 @path[10.0.0.2, 4421]: 13319 00:25:35.452 @path[10.0.0.2, 4421]: 13294 00:25:35.452 @path[10.0.0.2, 4421]: 13279 00:25:35.452 @path[10.0.0.2, 4421]: 13238 00:25:35.452 21:24:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:35.452 21:24:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:35.452 21:24:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:35.452 21:24:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:35.452 21:24:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:35.452 21:24:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:35.452 21:24:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87095 00:25:35.452 21:24:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:35.452 21:24:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:25:35.452 21:24:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:35.452 21:24:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:35.711 21:24:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:25:35.711 21:24:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86999 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:35.711 21:24:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87212 00:25:35.711 21:24:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:42.274 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:42.274 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:42.274 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:42.274 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:42.274 Attaching 4 probes... 
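Each confirm_io_on_port step above follows the same recipe: ask the target which listener currently has the requested ANA state, then check that the bpftrace @path map only reports I/O on that same port. A condensed sketch of the check, assembled from the jq/awk/cut/sed fragments visible in the log (variable names are illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  state=non_optimized   # optimized / non_optimized / "" depending on the scenario

  # port the target itself reports in the requested ANA state
  active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r ".[] | select (.ana_states[0].ana_state==\"$state\") | .address.trsvcid")

  # first port that actually shows up in the bpftrace @path map
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)

  [[ $port == "$active_port" ]]   # the test asserts these two agree

The @path lines just below come from the window in which 4420 was left non_optimized and 4421 was made inaccessible, so all I/O lands on port 4420.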
00:25:42.274 @path[10.0.0.2, 4420]: 13048 00:25:42.274 @path[10.0.0.2, 4420]: 13192 00:25:42.274 @path[10.0.0.2, 4420]: 13312 00:25:42.274 @path[10.0.0.2, 4420]: 13298 00:25:42.274 @path[10.0.0.2, 4420]: 13366 00:25:42.274 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:42.274 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:42.274 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:42.274 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:42.274 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:42.274 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:42.274 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87212 00:25:42.274 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:42.274 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:25:42.274 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:42.274 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:42.531 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:25:42.531 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87320 00:25:42.531 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:42.531 21:24:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86999 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:49.090 21:24:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:49.090 21:24:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:49.090 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:49.090 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:49.090 Attaching 4 probes... 
00:25:49.090 @path[10.0.0.2, 4421]: 10334 00:25:49.090 @path[10.0.0.2, 4421]: 13084 00:25:49.090 @path[10.0.0.2, 4421]: 13074 00:25:49.090 @path[10.0.0.2, 4421]: 13004 00:25:49.090 @path[10.0.0.2, 4421]: 12996 00:25:49.090 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:49.090 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:49.090 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:49.090 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:49.090 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:49.090 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:49.090 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87320 00:25:49.090 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:49.090 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:25:49.090 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:49.090 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:49.349 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:25:49.349 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86999 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:49.349 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87433 00:25:49.349 21:25:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:55.908 21:25:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:55.908 21:25:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:25:55.908 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:25:55.908 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:55.908 Attaching 4 probes... 
00:25:55.908 00:25:55.908 00:25:55.908 00:25:55.908 00:25:55.908 00:25:55.908 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:55.908 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:55.908 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:55.908 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:25:55.908 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:25:55.908 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:25:55.908 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87433 00:25:55.908 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:55.908 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:25:55.908 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:55.908 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:56.167 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:25:56.167 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87546 00:25:56.167 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86999 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:56.167 21:25:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:02.734 21:25:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:02.734 21:25:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:02.734 21:25:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:02.734 21:25:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:02.734 Attaching 4 probes... 
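The empty window above (timestamps only, no @path entries) corresponds to both listeners having been made inaccessible, so neither path could carry I/O. The step just executed reverses that: 4420 goes back to non_optimized and 4421 to optimized. Each of these transitions is driven by the same pair of RPCs; a sketch of what the set_ANA_state helper visibly does here, reconstructed from the xtrace output rather than from the script itself:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  set_ANA_state() {   # $1 -> state for port 4420, $2 -> state for port 4421
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
  set_ANA_state non_optimized optimized   # the combination used for this window

The @path counts that follow confirm that I/O moved back to port 4421.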
00:26:02.734 @path[10.0.0.2, 4421]: 14918 00:26:02.734 @path[10.0.0.2, 4421]: 15335 00:26:02.734 @path[10.0.0.2, 4421]: 15336 00:26:02.734 @path[10.0.0.2, 4421]: 15245 00:26:02.734 @path[10.0.0.2, 4421]: 15095 00:26:02.734 21:25:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:02.734 21:25:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:02.734 21:25:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:02.734 21:25:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:02.734 21:25:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:02.734 21:25:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:02.734 21:25:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87546 00:26:02.734 21:25:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:02.734 21:25:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:02.734 21:25:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:26:04.109 21:25:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:26:04.109 21:25:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87664 00:26:04.109 21:25:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86999 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:04.109 21:25:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:10.755 21:25:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:10.755 21:25:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:26:10.755 21:25:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:26:10.755 21:25:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:10.755 Attaching 4 probes... 
00:26:10.755 @path[10.0.0.2, 4420]: 14085 00:26:10.755 @path[10.0.0.2, 4420]: 12874 00:26:10.755 @path[10.0.0.2, 4420]: 12706 00:26:10.755 @path[10.0.0.2, 4420]: 12784 00:26:10.755 @path[10.0.0.2, 4420]: 12680 00:26:10.755 21:25:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:10.755 21:25:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:10.755 21:25:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:10.755 21:25:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:26:10.755 21:25:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:26:10.755 21:25:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:26:10.755 21:25:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87664 00:26:10.755 21:25:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:10.755 21:25:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:10.755 [2024-07-14 21:25:21.728839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:10.755 21:25:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:10.755 21:25:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:26:17.314 21:25:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:26:17.314 21:25:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87839 00:26:17.314 21:25:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86999 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:17.314 21:25:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:22.595 21:25:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:22.595 21:25:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:22.853 Attaching 4 probes... 
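The last two steps exercise failover and failback at the listener level rather than through ANA state alone: port 4421 was removed entirely (forcing I/O onto 4420, as the counts above show), then re-added and marked optimized so traffic can move back. A sketch of that RPC sequence as it appears in the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # drop the 4421 path; outstanding and new I/O must fail over to 4420
  $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
  sleep 1

  # bring 4421 back and make it the preferred (optimized) path again
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n optimized

The @path lines below show the final window, with I/O back on port 4421.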
00:26:22.853 @path[10.0.0.2, 4421]: 12597 00:26:22.853 @path[10.0.0.2, 4421]: 12737 00:26:22.853 @path[10.0.0.2, 4421]: 12749 00:26:22.853 @path[10.0.0.2, 4421]: 12701 00:26:22.853 @path[10.0.0.2, 4421]: 12729 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87839 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 87055 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 87055 ']' 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 87055 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87055 00:26:22.853 killing process with pid 87055 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87055' 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 87055 00:26:22.853 21:25:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 87055 00:26:23.797 Connection closed with partial response: 00:26:23.797 00:26:23.797 00:26:24.063 21:25:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 87055 00:26:24.063 21:25:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:24.063 [2024-07-14 21:24:36.973375] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:24.063 [2024-07-14 21:24:36.973548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87055 ] 00:26:24.063 [2024-07-14 21:24:37.137552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.063 [2024-07-14 21:24:37.337949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:24.063 [2024-07-14 21:24:37.530096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:24.063 Running I/O for 90 seconds... 
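After the final window the harness tears bdevperf down with the killprocess helper seen a few lines up: it verifies the pid still exists, checks that the process name (reactor_2 in this run) is not a sudo wrapper, then signals it and waits. A bare-bones sketch of that check, using the pid from this run:

  pid=87055
  kill -0 "$pid"                           # still alive?
  name=$(ps --no-headers -o comm= "$pid")  # reports reactor_2 here
  [ "$name" != sudo ] && kill "$pid"       # only signal the process directly if it is not a sudo helper
  wait "$pid"

The NOTICE lines that follow are the bdevperf output captured in try.txt: READ/WRITE completions returned with ASYMMETRIC ACCESS INACCESSIBLE (03/02) whenever the path they were queued on had just been made inaccessible, after which the I/O is presumably retried on the surviving path, which is exactly the behaviour the windows above are verifying.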
00:26:24.063 [2024-07-14 21:24:47.117091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.117204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.117305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.117352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.117386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.117423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.117453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.117475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.117521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.117543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.117571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.117609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.117639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.117661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.117692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.117714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.117763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.117790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.117841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.117868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.117900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.117939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.117987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.118010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.118040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.118062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.118092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.118114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.118143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.118166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.118196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.063 [2024-07-14 21:24:47.118218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.118248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.063 [2024-07-14 21:24:47.118271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.118302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.063 [2024-07-14 21:24:47.118324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.118354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.063 [2024-07-14 21:24:47.118377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.118407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.063 [2024-07-14 21:24:47.118430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.118460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.063 [2024-07-14 21:24:47.118482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.118512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.063 [2024-07-14 21:24:47.118534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.118575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.063 [2024-07-14 21:24:47.118611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.118646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.063 [2024-07-14 21:24:47.118669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.063 [2024-07-14 21:24:47.118700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.063 [2024-07-14 21:24:47.118723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.118770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.118796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.118829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.118852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.118884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.118907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.118938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.118961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.118992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.119014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.119068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.119132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.119185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.119238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.119290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.119354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.119408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.119461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.119515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.119578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.119633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.119686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.119740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.119814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.119867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.119940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.119972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.119996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.120063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.120117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120147] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.120170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.120223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.120276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.120329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.120382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.120435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.064 [2024-07-14 21:24:47.120501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.120595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.120652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.120706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c 
p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.120794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.120849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.120903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.120956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.120987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.121010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.121041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.121064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.121094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.121117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:24.064 [2024-07-14 21:24:47.121147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.064 [2024-07-14 21:24:47.121170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.121200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.121223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.121267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.121290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.121319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.121341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.121387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.121410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.121440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.121472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.121504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.121526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.121557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.121580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.121610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.121633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.121666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.121689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.121720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.121743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.121774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.121826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.121875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.121898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.121929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.121953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.121984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.122007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.122061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.122114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.122176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.122232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.122285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.122338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.122391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:24.065 [2024-07-14 21:24:47.122445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.122497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.122550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.122611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.122665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.122717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.122770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.122841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.122905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.122958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.122989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.123012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.123042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.123064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.123094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.123117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.123147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.123170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.123200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.123222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.123254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.123277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.123307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.123329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.123360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.123382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.123413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.123435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.123466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.065 [2024-07-14 21:24:47.123492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:24.065 [2024-07-14 21:24:47.123532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.065 [2024-07-14 21:24:47.123556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.123587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.066 [2024-07-14 21:24:47.123626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.123658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.066 [2024-07-14 21:24:47.123681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.123712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.066 [2024-07-14 21:24:47.123735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.123781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.066 [2024-07-14 21:24:47.123805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.123837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.066 [2024-07-14 21:24:47.123859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.123890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.066 [2024-07-14 21:24:47.123913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.125791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.066 [2024-07-14 21:24:47.125835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.125881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:47.125907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.125939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:47.125963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:26:24.066 [2024-07-14 21:24:47.125994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:47.126016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.126048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:47.126071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.126102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:47.126139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.126173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:47.126197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.126228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:47.126252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.126305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:47.126334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.126367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:47.126391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.126422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:47.126445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.126476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:47.126499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:47.126530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:47.126553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.657855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.657954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.658063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.658099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.658135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.658188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.658217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.658239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.658301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.658347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.658382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.658405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.658435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.658457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.658489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.658511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.658548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.658573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.658604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.658626] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.658671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.658692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.658722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.658758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.658786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.658806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.658835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.658907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.658941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.658963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.659000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.066 [2024-07-14 21:24:53.659023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.659053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.066 [2024-07-14 21:24:53.659076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.659120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.066 [2024-07-14 21:24:53.659144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.659175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.066 [2024-07-14 21:24:53.659198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.659244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.066 [2024-07-14 21:24:53.659265] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.659309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.066 [2024-07-14 21:24:53.659331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.659359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.066 [2024-07-14 21:24:53.659381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.066 [2024-07-14 21:24:53.659427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.066 [2024-07-14 21:24:53.659448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.659478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.067 [2024-07-14 21:24:53.659516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.659547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.067 [2024-07-14 21:24:53.659570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.659600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.067 [2024-07-14 21:24:53.659622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.659653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.067 [2024-07-14 21:24:53.659675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.659715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.067 [2024-07-14 21:24:53.659738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.659783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.067 [2024-07-14 21:24:53.659822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.659867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:24.067 [2024-07-14 21:24:53.659911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.659963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.067 [2024-07-14 21:24:53.659986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.067 [2024-07-14 21:24:53.660040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.067 [2024-07-14 21:24:53.660102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.067 [2024-07-14 21:24:53.660155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.067 [2024-07-14 21:24:53.660209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.067 [2024-07-14 21:24:53.660269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.067 [2024-07-14 21:24:53.660321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.067 [2024-07-14 21:24:53.660376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.067 [2024-07-14 21:24:53.660430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 
nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.067 [2024-07-14 21:24:53.660495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.067 [2024-07-14 21:24:53.660552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.067 [2024-07-14 21:24:53.660614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.067 [2024-07-14 21:24:53.660669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.067 [2024-07-14 21:24:53.660722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.067 [2024-07-14 21:24:53.660798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.067 [2024-07-14 21:24:53.660872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.067 [2024-07-14 21:24:53.660928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.660959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.067 [2024-07-14 21:24:53.660982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.067 [2024-07-14 21:24:53.661013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.661037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.661068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.661090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.661121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.661173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.661203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.661224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.661268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.661322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.661352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.661374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.661432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.661457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.661488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.661511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.661548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.661572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.661603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.661640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.661670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.661708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:26:24.068 [2024-07-14 21:24:53.661751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.661772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.661800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.661821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.661866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.661905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.661953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.661993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.662047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.662101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.662179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.662326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.662379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.662431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.662484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.662545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.662598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.662649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.662700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.662754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.662827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.662894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.662946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.662992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.663023] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.663056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.068 [2024-07-14 21:24:53.663080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.663122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.663146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.663181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.663205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.663244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.663267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.663298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.663320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.663350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.663373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.663403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.663441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.663471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.663493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.663540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.663563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.663593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.663615] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.663645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.068 [2024-07-14 21:24:53.663668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.068 [2024-07-14 21:24:53.663698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.663728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.663760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.663782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.663826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.663852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.663884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.663907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.663937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.663959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.663990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.664012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.664042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.664064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.664098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.664121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.664151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:24.069 [2024-07-14 21:24:53.664174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.664203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.664226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.664256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.664278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.664309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.664331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.664361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.664384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.664424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.664459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.664515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.664540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.664571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.664594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.664624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.664647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.664677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.664699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.664736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4168 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.664758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.664803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.664844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.664877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.664900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.665908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.665947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.665997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.666024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.666071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.666096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.666151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.666173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.666247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.666272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.666312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.666335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.666375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.666398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.666439] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.666462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.666522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:24:53.666550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.666592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.666615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.666656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.666679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.666719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.666742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.666800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.666826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.666868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.666891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.666931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.666954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.666994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.667017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:24:53.667058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.069 [2024-07-14 21:24:53.667091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:25:00.865198] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:25:00.865301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:25:00.865387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:25:00.865417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:25:00.865450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:25:00.865473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:25:00.865503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:25:00.865541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.069 [2024-07-14 21:25:00.865573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.069 [2024-07-14 21:25:00.865595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.865624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.865646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.865677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.865699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.865729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.865750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.865796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.865822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.865867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.865888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 
dnr:0 00:26:24.070 [2024-07-14 21:25:00.865917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.865938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.865968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.866032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.866089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.866142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.866195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.866247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.866300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.866368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.866418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.866484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.866533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.866616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.866669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.866722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.070 [2024-07-14 21:25:00.866791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.070 [2024-07-14 21:25:00.866858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.070 [2024-07-14 21:25:00.866930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.866974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.070 [2024-07-14 21:25:00.867012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.070 [2024-07-14 21:25:00.867069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.070 [2024-07-14 21:25:00.867138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.070 [2024-07-14 21:25:00.867191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.070 [2024-07-14 21:25:00.867244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.867306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.867361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.867414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.867495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.867557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.867607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.070 [2024-07-14 21:25:00.867674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:24.070 [2024-07-14 21:25:00.867749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.070 [2024-07-14 21:25:00.867801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.070 [2024-07-14 21:25:00.867854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.070 [2024-07-14 21:25:00.867925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.867957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.070 [2024-07-14 21:25:00.868008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.868037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.070 [2024-07-14 21:25:00.868058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.868086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.070 [2024-07-14 21:25:00.868128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.070 [2024-07-14 21:25:00.868158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.070 [2024-07-14 21:25:00.868195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.868226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.868247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.868293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.868327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.868360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.868383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.868414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.868436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.868467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.868501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.868535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.868557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.868587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.868609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.868640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.868662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.868693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.868717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.868748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.868787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.868820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.868843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.868874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.868897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.868928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.868950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.868980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.869011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.869073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.869095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.869124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.869145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.869174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.869196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.869231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.869255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.869300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.869323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.869369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.869392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.869422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.869445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.869475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.869498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:26:24.071 [2024-07-14 21:25:00.869529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.869552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.869582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.869604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.869634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.869657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.869687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.869739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.869777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.869799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.869827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.869896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.869930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.869958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.869988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.870011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.870042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.870064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.870094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.870117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.870148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.071 [2024-07-14 21:25:00.870171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.870202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.870240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.870284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.870305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.870334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.071 [2024-07-14 21:25:00.870355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.071 [2024-07-14 21:25:00.870384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.870439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.870469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.870491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.870531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.870554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.870584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.870607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.870637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.870660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.870690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.870712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.870742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.870779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.870855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.870877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.870936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.870959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.871027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.871080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.871132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.871186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.871240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.871313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:24.072 [2024-07-14 21:25:00.871384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.871435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.871487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.871538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.871608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.871661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.871718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.871771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.871839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.871895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 
nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.871947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.871977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.872040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.872072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.872095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.873179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.072 [2024-07-14 21:25:00.873218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.873269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.873294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.873343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.873367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.873407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.873430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.873470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.873493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.873534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.873572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.873609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.873631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.873685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.873707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.873767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.873810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.873851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.873891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.873934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.873958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.874012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.874037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.874077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.874100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.874200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.072 [2024-07-14 21:25:00.874224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.072 [2024-07-14 21:25:00.874264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:00.874287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:00.874326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:00.874350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:00.874390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:00.874414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:26:24.073 [2024-07-14 21:25:14.193093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.193195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.193301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.193330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.193362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.193385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.193413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.193434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.193462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.193483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.193512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.193533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.193586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.193609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.193638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.193658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.193738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.193785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.193811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.193829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.193849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.193866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.193886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.193903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.193922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.193939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.193976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.193994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.194031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.194068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 
[2024-07-14 21:25:14.194278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194710] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.073 [2024-07-14 21:25:14.194779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.194817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.194875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.194916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.194956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.194977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.194996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.195017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.195035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.195055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.195074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.195094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.073 [2024-07-14 21:25:14.195113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.073 [2024-07-14 21:25:14.195132] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.195151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.195190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.195229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.195268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.195334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.195396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.195437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.195477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.195517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.195592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55240 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.195635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.195675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.195717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.195772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.195846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.195885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.195932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.195972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.195992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.196010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.196047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:24.074 [2024-07-14 21:25:14.196085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.196122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.196160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.196198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196471] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.074 [2024-07-14 21:25:14.196889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.196929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.196965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.196991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.197013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.074 [2024-07-14 21:25:14.197032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.074 [2024-07-14 21:25:14.197052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.075 [2024-07-14 21:25:14.197878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.075 [2024-07-14 21:25:14.197916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.075 [2024-07-14 21:25:14.197954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.197974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.075 [2024-07-14 21:25:14.197992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.198020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.075 [2024-07-14 21:25:14.198039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.198059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.075 [2024-07-14 21:25:14.198076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.198096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.075 [2024-07-14 21:25:14.198130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.198151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.075 [2024-07-14 21:25:14.198169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 
21:25:14.198188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(5) to be set 00:26:24.075 [2024-07-14 21:25:14.198229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.075 [2024-07-14 21:25:14.198245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.075 [2024-07-14 21:25:14.198261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55440 len:8 PRP1 0x0 PRP2 0x0 00:26:24.075 [2024-07-14 21:25:14.198280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.198300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.075 [2024-07-14 21:25:14.198314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.075 [2024-07-14 21:25:14.198328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56024 len:8 PRP1 0x0 PRP2 0x0 00:26:24.075 [2024-07-14 21:25:14.198345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.198362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.075 [2024-07-14 21:25:14.198378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.075 [2024-07-14 21:25:14.198393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56032 len:8 PRP1 0x0 PRP2 0x0 00:26:24.075 [2024-07-14 21:25:14.198410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.198427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.075 [2024-07-14 21:25:14.198441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.075 [2024-07-14 21:25:14.198455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56040 len:8 PRP1 0x0 PRP2 0x0 00:26:24.075 [2024-07-14 21:25:14.198471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.198487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.075 [2024-07-14 21:25:14.198500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.075 [2024-07-14 21:25:14.198514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56048 len:8 PRP1 0x0 PRP2 0x0 00:26:24.075 [2024-07-14 21:25:14.198531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.075 [2024-07-14 21:25:14.198557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.075 [2024-07-14 21:25:14.198571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.075 [2024-07-14 21:25:14.198585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56056 len:8 PRP1 0x0 PRP2 0x0 00:26:24.075 [2024-07-14 21:25:14.198602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.198618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.076 [2024-07-14 21:25:14.198632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.076 [2024-07-14 21:25:14.198645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56064 len:8 PRP1 0x0 PRP2 0x0 00:26:24.076 [2024-07-14 21:25:14.198662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.198678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.076 [2024-07-14 21:25:14.198691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.076 [2024-07-14 21:25:14.198705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56072 len:8 PRP1 0x0 PRP2 0x0 00:26:24.076 [2024-07-14 21:25:14.198722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.198738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.076 [2024-07-14 21:25:14.198766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.076 [2024-07-14 21:25:14.198784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56080 len:8 PRP1 0x0 PRP2 0x0 00:26:24.076 [2024-07-14 21:25:14.198802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.198819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.076 [2024-07-14 21:25:14.198833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.076 [2024-07-14 21:25:14.198847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55448 len:8 PRP1 0x0 PRP2 0x0 00:26:24.076 [2024-07-14 21:25:14.198864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.198888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.076 [2024-07-14 21:25:14.198905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.076 [2024-07-14 21:25:14.198920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55456 len:8 PRP1 0x0 PRP2 0x0 00:26:24.076 [2024-07-14 21:25:14.198936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.198953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.076 [2024-07-14 21:25:14.198966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.076 [2024-07-14 21:25:14.198980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55464 len:8 PRP1 0x0 PRP2 0x0 00:26:24.076 [2024-07-14 21:25:14.198996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.199013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.076 [2024-07-14 21:25:14.199026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.076 [2024-07-14 21:25:14.199039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55472 len:8 PRP1 0x0 PRP2 0x0 00:26:24.076 [2024-07-14 21:25:14.199064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.199081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.076 [2024-07-14 21:25:14.199095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.076 [2024-07-14 21:25:14.199109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55480 len:8 PRP1 0x0 PRP2 0x0 00:26:24.076 [2024-07-14 21:25:14.199125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.199142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.076 [2024-07-14 21:25:14.199155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.076 [2024-07-14 21:25:14.199169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55488 len:8 PRP1 0x0 PRP2 0x0 00:26:24.076 [2024-07-14 21:25:14.199185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.199202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.076 [2024-07-14 21:25:14.199215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.076 [2024-07-14 21:25:14.199229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55496 len:8 PRP1 0x0 PRP2 0x0 00:26:24.076 [2024-07-14 21:25:14.199245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.199262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.076 [2024-07-14 21:25:14.199292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.076 [2024-07-14 21:25:14.199306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55504 len:8 PRP1 0x0 PRP2 0x0 00:26:24.076 [2024-07-14 21:25:14.199323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.199642] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 
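The long run of ABORTED - SQ DELETION completions above is the expected side effect of the multipath test tearing down the active TCP qpair: every READ and WRITE still queued on that submission queue is drained and reported as aborted before the controller is reset. A quick way to summarize such a flood after the fact is sketched below; it assumes the console output was saved to a file named multipath.log (an example name, not produced by this run) and relies on GNU grep for -o and \| alternation.
# Count the printed (aborted) I/O by opcode from a saved copy of this log (illustrative helper).
grep -o 'NOTICE\*: \(READ\|WRITE\) sqid:[0-9]*' multipath.log \
  | awk '{count[$2]++} END {for (op in count) print op, count[op]}'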
00:26:24.076 [2024-07-14 21:25:14.199828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.076 [2024-07-14 21:25:14.199863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.199887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.076 [2024-07-14 21:25:14.199924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.199945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.076 [2024-07-14 21:25:14.199963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.199981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.076 [2024-07-14 21:25:14.199999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.200018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.076 [2024-07-14 21:25:14.200037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.076 [2024-07-14 21:25:14.200076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:26:24.076 [2024-07-14 21:25:14.201526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.076 [2024-07-14 21:25:14.201597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:26:24.076 [2024-07-14 21:25:14.202186] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.076 [2024-07-14 21:25:14.202233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4421 00:26:24.076 [2024-07-14 21:25:14.202261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:26:24.076 [2024-07-14 21:25:14.202350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:26:24.076 [2024-07-14 21:25:14.202401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.076 [2024-07-14 21:25:14.202446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.076 [2024-07-14 21:25:14.202469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.076 [2024-07-14 21:25:14.202523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
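The connect() failure with errno 111 (connection refused) against 10.0.0.2 port 4421 just above means the alternate listener was not reachable at that instant; the host keeps retrying, and the reset completes successfully about ten seconds later on the next line. On the target side this kind of path flap is produced by adding and removing listeners on the same subsystem. A rough sketch with the SPDK RPCs (illustrative only, not the exact sequence multipath.sh runs; the 4420/4421 ports mirror the ones seen in this log):
# Publish a second TCP path for cnode1, then drop the original so the host fails over.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420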
00:26:24.076 [2024-07-14 21:25:14.202550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.076 [2024-07-14 21:25:24.298495] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:24.076 Received shutdown signal, test time was about 55.493099 seconds 00:26:24.076 00:26:24.076 Latency(us) 00:26:24.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.076 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:24.076 Verification LBA range: start 0x0 length 0x4000 00:26:24.076 Nvme0n1 : 55.49 5732.79 22.39 0.00 0.00 22295.16 1608.61 7046430.72 00:26:24.076 =================================================================================================================== 00:26:24.076 Total : 5732.79 22.39 0.00 0.00 22295.16 1608.61 7046430.72 00:26:24.076 21:25:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:24.335 rmmod nvme_tcp 00:26:24.335 rmmod nvme_fabrics 00:26:24.335 rmmod nvme_keyring 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 86999 ']' 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 86999 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 86999 ']' 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 86999 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:24.335 21:25:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86999 00:26:24.593 21:25:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:24.593 21:25:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:24.593 21:25:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86999' 00:26:24.593 killing process with pid 86999 00:26:24.593 21:25:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- 
# kill 86999 00:26:24.593 21:25:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 86999 00:26:26.024 21:25:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:26.024 21:25:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:26.024 21:25:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:26.024 21:25:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:26.024 21:25:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:26.024 21:25:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.024 21:25:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.024 21:25:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.024 21:25:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:26.024 00:26:26.024 real 1m3.633s 00:26:26.024 user 2m56.679s 00:26:26.024 sys 0m16.824s 00:26:26.024 21:25:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:26.024 21:25:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:26.024 ************************************ 00:26:26.024 END TEST nvmf_host_multipath 00:26:26.024 ************************************ 00:26:26.024 21:25:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:26.024 21:25:37 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:26.024 21:25:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:26.024 21:25:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:26.024 21:25:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:26.024 ************************************ 00:26:26.024 START TEST nvmf_timeout 00:26:26.024 ************************************ 00:26:26.024 21:25:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:26.024 * Looking for test storage... 
00:26:26.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.025 
21:25:37 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.025 21:25:37 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:26.025 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:26.283 Cannot find device "nvmf_tgt_br" 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:26.283 Cannot find device "nvmf_tgt_br2" 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:26.283 Cannot find device "nvmf_tgt_br" 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:26.283 Cannot find device "nvmf_tgt_br2" 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:26.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:26.283 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:26.283 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:26.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:26:26.542 00:26:26.542 --- 10.0.0.2 ping statistics --- 00:26:26.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.542 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:26.542 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:26.542 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:26:26.542 00:26:26.542 --- 10.0.0.3 ping statistics --- 00:26:26.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.542 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:26.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:26.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:26:26.542 00:26:26.542 --- 10.0.0.1 ping statistics --- 00:26:26.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.542 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=88167 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 88167 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 88167 ']' 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:26.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:26.542 21:25:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.542 [2024-07-14 21:25:38.078960] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
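For reference, the nvmf_veth_init sequence traced above reduces to a small, fixed topology: a network namespace (nvmf_tgt_ns_spdk) holds the target ends of two veth pairs, the host keeps the initiator end plus the bridge-facing peers, everything is enslaved to one bridge (nvmf_br), and TCP port 4420 is opened for NVMe/TCP before the three ping checks. The following is a condensed sketch of those same commands (run as root; interface names and addresses are exactly the ones nvmf/common.sh uses above, nothing new is introduced):

# Target-side interfaces live in their own namespace so initiator traffic
# really crosses a (virtual) network.
ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: one for the initiator, two for the target.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and address everything in 10.0.0.0/24.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up and tie the host-side peers together with a bridge.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Accept NVMe/TCP (port 4420) on the initiator interface and let the bridge
# forward between its own ports.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks, matching the ping output above.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1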
00:26:26.542 [2024-07-14 21:25:38.079157] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.801 [2024-07-14 21:25:38.263307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:27.061 [2024-07-14 21:25:38.519104] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.061 [2024-07-14 21:25:38.519182] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.061 [2024-07-14 21:25:38.519223] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.061 [2024-07-14 21:25:38.519267] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.061 [2024-07-14 21:25:38.519294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.061 [2024-07-14 21:25:38.519672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.061 [2024-07-14 21:25:38.519767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.320 [2024-07-14 21:25:38.741522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:27.578 21:25:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:27.578 21:25:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:27.578 21:25:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:27.578 21:25:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:27.579 21:25:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:27.579 21:25:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.579 21:25:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:27.579 21:25:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:27.838 [2024-07-14 21:25:39.342126] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.838 21:25:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:28.405 Malloc0 00:26:28.405 21:25:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:28.663 21:25:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:28.922 21:25:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.181 [2024-07-14 21:25:40.481345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.181 21:25:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=88222 00:26:29.181 21:25:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 88222 /var/tmp/bdevperf.sock 
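The target-side setup that produces the notices above is a short RPC sequence: start nvmf_tgt inside the target namespace, create the TCP transport, back it with a malloc bdev, and export that bdev through subsystem nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. The sketch below reuses exactly the commands and flags logged above; the SPDK variable, the backgrounding with &, and the sleep stand-in for the harness's waitforlisten helper are assumptions added here for readability.

SPDK=/home/vagrant/spdk_repo/spdk

# nvmf_tgt on cores 0-1 (-m 0x3) with all tracepoint groups enabled (-e 0xFFFF),
# launched inside the target namespace; its RPC socket is /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
sleep 3   # stand-in for waitforlisten on /var/tmp/spdk.sock

# Same transport options the test passes, then a 64 MiB malloc bdev with
# 512-byte blocks, exported via an allow-any-host subsystem on 10.0.0.2:4420.
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420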
00:26:29.181 21:25:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 88222 ']' 00:26:29.181 21:25:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:29.181 21:25:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:29.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:29.181 21:25:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:29.181 21:25:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:29.181 21:25:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:29.181 21:25:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.181 [2024-07-14 21:25:40.609716] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:29.181 [2024-07-14 21:25:40.609957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88222 ] 00:26:29.440 [2024-07-14 21:25:40.787819] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.699 [2024-07-14 21:25:41.022238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.699 [2024-07-14 21:25:41.218481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:30.265 21:25:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:30.265 21:25:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:30.265 21:25:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:30.265 21:25:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:30.830 NVMe0n1 00:26:30.830 21:25:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=88245 00:26:30.830 21:25:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:26:30.830 21:25:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:30.830 Running I/O for 10 seconds... 
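On the initiator side the harness drives everything through a second SPDK application, bdevperf, over its own RPC socket. The two attach options are what this test is really about: --reconnect-delay-sec 2 spaces the reconnect attempts, and --ctrlr-loss-timeout-sec 5 caps how long bdev_nvme keeps retrying before it gives the controller up. The sketch below reproduces the commands logged above; the RPC shortcut variable, the sleep stand-in for waitforlisten, and the backgrounding are assumptions for readability, and the -f and -r -1 flags are passed through as logged without further interpretation.

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# bdevperf on core 2 (-m 0x4); -z keeps it idle until perform_tests arrives over
# /var/tmp/bdevperf.sock, then it runs 128 outstanding 4 KiB verify I/Os for 10 s.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -f &
sleep 3   # stand-in for waitforlisten on the bdevperf RPC socket

# Same NVMe bdev options the harness sets before attaching (as logged above),
# then attach NVMe0 with the reconnect/loss timeouts under test.
$RPC bdev_nvme_set_options -r -1
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Start the workload; this is the "Running I/O for 10 seconds..." line above.
# The harness keeps its PID (rpc_pid) so it can wait on it after injecting the fault.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &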
00:26:31.764 21:25:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:32.023 [2024-07-14 21:25:43.349017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.023 [2024-07-14 21:25:43.349102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.349150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.023 [2024-07-14 21:25:43.349168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.349186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.023 [2024-07-14 21:25:43.349201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.349219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.023 [2024-07-14 21:25:43.349233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.349249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:32.023 [2024-07-14 21:25:43.349611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.023 [2024-07-14 21:25:43.349642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.349675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.349694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.349712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.349729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.349747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.349782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.349802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.349824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 
21:25:43.349842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.349860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.349877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.349895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.349912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.349928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.349945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.349962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.349980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.349997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:33 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.023 [2024-07-14 21:25:43.350813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.023 [2024-07-14 21:25:43.350830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.350848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.350865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.350882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.350899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47664 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.350916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.350933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.350952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.350969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.350986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 
21:25:43.351281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.351969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.351986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.352003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.352020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.352036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.352053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.352070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.352087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.352107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.352124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.352143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.352160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.352177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.352194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.352211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.352228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.352245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.352261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.352278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.352295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.024 [2024-07-14 21:25:43.352322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.024 [2024-07-14 21:25:43.352338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:32.025 [2024-07-14 21:25:43.352372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352742] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.352972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.352991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353114] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:48184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48232 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:48248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.025 [2024-07-14 21:25:43.353653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.025 [2024-07-14 21:25:43.353687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.025 [2024-07-14 21:25:43.353721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.025 [2024-07-14 21:25:43.353770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.025 [2024-07-14 21:25:43.353809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.025 [2024-07-14 21:25:43.353844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.025 
[2024-07-14 21:25:43.353879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.025 [2024-07-14 21:25:43.353914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.025 [2024-07-14 21:25:43.353930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.026 [2024-07-14 21:25:43.353947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.026 [2024-07-14 21:25:43.353964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.026 [2024-07-14 21:25:43.353981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.026 [2024-07-14 21:25:43.353998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.026 [2024-07-14 21:25:43.354015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.026 [2024-07-14 21:25:43.354031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.026 [2024-07-14 21:25:43.354049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.026 [2024-07-14 21:25:43.354065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.026 [2024-07-14 21:25:43.354084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.026 [2024-07-14 21:25:43.354101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.026 [2024-07-14 21:25:43.354118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.026 [2024-07-14 21:25:43.354135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.026 [2024-07-14 21:25:43.354154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.026 [2024-07-14 21:25:43.354173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.026 [2024-07-14 21:25:43.354190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.026 [2024-07-14 21:25:43.354209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:48272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.026 [2024-07-14 21:25:43.354226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.026 [2024-07-14 21:25:43.354243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:26:32.026 [2024-07-14 21:25:43.354264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:32.026 [2024-07-14 21:25:43.354277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:32.026 [2024-07-14 21:25:43.354294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48280 len:8 PRP1 0x0 PRP2 0x0 00:26:32.026 [2024-07-14 21:25:43.354309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.026 [2024-07-14 21:25:43.354567] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 00:26:32.026 [2024-07-14 21:25:43.354873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.026 [2024-07-14 21:25:43.354917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:32.026 [2024-07-14 21:25:43.355059] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.026 [2024-07-14 21:25:43.355094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:32.026 [2024-07-14 21:25:43.355128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:32.026 [2024-07-14 21:25:43.355164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:32.026 [2024-07-14 21:25:43.355200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.026 [2024-07-14 21:25:43.355221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.026 [2024-07-14 21:25:43.355240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.026 [2024-07-14 21:25:43.355272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
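The long run of ABORTED - SQ DELETION completions above is the initiator-side NVMe driver aborting every request that was still queued on the connection once the listener went away: the qpair is freed, bdev_nvme starts its reset/reconnect loop, and each reconnect fails with errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more. The fault injection that opened this block is a single RPC against the target's default socket (not the bdevperf socket), copied here from the trace above:

# Remove the listener while bdevperf is mid-run; NVMe0n1 keeps queueing I/O on
# the initiator while bdev_nvme retries the connection every reconnect-delay.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

From this point the timestamps are the interesting part of the log: reconnect attempts land roughly every 2 seconds, as configured.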
00:26:32.026 [2024-07-14 21:25:43.355292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.026 21:25:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:26:33.924 [2024-07-14 21:25:45.355547] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.924 [2024-07-14 21:25:45.355717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:33.924 [2024-07-14 21:25:45.355747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:33.924 [2024-07-14 21:25:45.355801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:33.924 [2024-07-14 21:25:45.355840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.924 [2024-07-14 21:25:45.355856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.924 [2024-07-14 21:25:45.355875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.924 [2024-07-14 21:25:45.355929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.924 [2024-07-14 21:25:45.355969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.924 21:25:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:26:33.924 21:25:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:33.924 21:25:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:34.182 21:25:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:26:34.182 21:25:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:26:34.182 21:25:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:34.182 21:25:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:34.441 21:25:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:26:34.441 21:25:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:26:35.816 [2024-07-14 21:25:47.356173] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.816 [2024-07-14 21:25:47.356261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:35.816 [2024-07-14 21:25:47.356292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:35.816 [2024-07-14 21:25:47.356331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:35.816 [2024-07-14 21:25:47.356379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.816 [2024-07-14 21:25:47.356411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.816 [2024-07-14 21:25:47.356430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.816 [2024-07-14 21:25:47.356481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.816 [2024-07-14 21:25:47.356533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:38.349 [2024-07-14 21:25:49.356633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:38.349 [2024-07-14 21:25:49.356717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:38.349 [2024-07-14 21:25:49.356738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:38.349 [2024-07-14 21:25:49.356769] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:38.349 [2024-07-14 21:25:49.356818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:38.916 00:26:38.916 Latency(us) 00:26:38.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.916 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:38.916 Verification LBA range: start 0x0 length 0x4000 00:26:38.916 NVMe0n1 : 8.14 725.94 2.84 15.73 0.00 172283.67 5421.61 7015926.69 00:26:38.916 =================================================================================================================== 00:26:38.916 Total : 725.94 2.84 15.73 0.00 172283.67 5421.61 7015926.69 00:26:38.916 0 00:26:39.483 21:25:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:26:39.483 21:25:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:39.483 21:25:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:39.741 21:25:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:26:39.741 21:25:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:26:39.742 21:25:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:39.742 21:25:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:40.000 21:25:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:26:40.000 21:25:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 88245 00:26:40.000 21:25:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 88222 00:26:40.000 21:25:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 88222 ']' 00:26:40.000 21:25:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 88222 00:26:40.000 21:25:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:26:40.000 21:25:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:40.000 21:25:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88222 00:26:40.000 21:25:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:40.000 21:25:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:40.000 killing process with pid 88222 00:26:40.000 21:25:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88222' 00:26:40.000 21:25:51 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@967 -- # kill 88222 00:26:40.000 Received shutdown signal, test time was about 9.307104 seconds 00:26:40.000 00:26:40.000 Latency(us) 00:26:40.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.000 =================================================================================================================== 00:26:40.000 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:40.000 21:25:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 88222 00:26:41.380 21:25:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:41.643 [2024-07-14 21:25:53.007189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.643 21:25:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:41.643 21:25:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=88376 00:26:41.643 21:25:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 88376 /var/tmp/bdevperf.sock 00:26:41.643 21:25:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 88376 ']' 00:26:41.643 21:25:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:41.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:41.643 21:25:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:41.643 21:25:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:41.643 21:25:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:41.643 21:25:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.643 [2024-07-14 21:25:53.132412] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:26:41.643 [2024-07-14 21:25:53.132613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88376 ] 00:26:41.902 [2024-07-14 21:25:53.305185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.159 [2024-07-14 21:25:53.508004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.159 [2024-07-14 21:25:53.703437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:42.726 21:25:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:42.726 21:25:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:42.726 21:25:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:42.984 21:25:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:26:43.243 NVMe0n1 00:26:43.243 21:25:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=88401 00:26:43.243 21:25:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:43.243 21:25:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:26:43.243 Running I/O for 10 seconds... 00:26:44.178 21:25:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:44.439 [2024-07-14 21:25:55.908903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.908974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 
21:25:55.909142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.439 [2024-07-14 21:25:55.909629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.439 [2024-07-14 21:25:55.909644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.909662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.909677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.909696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.909710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.909729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.909743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.910230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.910255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.910279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.910295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.910313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.910479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.910510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.910526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.910546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.910630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.910662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.910689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.910712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.910727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.910746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.910779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.910801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.910853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.910881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.910924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.910952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.910969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.910989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.911004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.911023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.913695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.913732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.913765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.913791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.913808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.913827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.913842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.913861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.913877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.913901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.913917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.913936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.913952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.913971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.913985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.914004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.914019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.914037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.914052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.914071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.914086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 
21:25:55.914104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.914119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.914138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.914153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.914174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.914188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.440 [2024-07-14 21:25:55.914208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.440 [2024-07-14 21:25:55.914235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:26 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.914981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.441 [2024-07-14 21:25:55.914996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.915017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.441 [2024-07-14 21:25:55.915032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.915052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.441 [2024-07-14 21:25:55.915067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.915086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.441 [2024-07-14 21:25:55.915101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.915119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:48376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.441 [2024-07-14 21:25:55.915134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.915152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48384 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:44.441 [2024-07-14 21:25:55.915167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.915185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.441 [2024-07-14 21:25:55.915200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.915219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.441 [2024-07-14 21:25:55.915234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.441 [2024-07-14 21:25:55.915252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.441 [2024-07-14 21:25:55.915267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 
[2024-07-14 21:25:55.915528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.442 [2024-07-14 21:25:55.915562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.442 [2024-07-14 21:25:55.915598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.442 [2024-07-14 21:25:55.915887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.915974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.915989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.916007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.916022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.916041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.916055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.916074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.916088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.916107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.916123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.916144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.916159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.916177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.916192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.916213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.916228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.916246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.916261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.916280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.442 [2024-07-14 21:25:55.916294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.442 [2024-07-14 21:25:55.916313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.916956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.916971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 
[2024-07-14 21:25:55.916990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.443 [2024-07-14 21:25:55.917005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.917025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:26:44.443 [2024-07-14 21:25:55.917046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.443 [2024-07-14 21:25:55.917066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.443 [2024-07-14 21:25:55.917080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48776 len:8 PRP1 0x0 PRP2 0x0 00:26:44.443 [2024-07-14 21:25:55.917111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.917371] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 00:26:44.443 [2024-07-14 21:25:55.917524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.443 [2024-07-14 21:25:55.917562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.917582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.443 [2024-07-14 21:25:55.917598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.917614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.443 [2024-07-14 21:25:55.917629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.917644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.443 [2024-07-14 21:25:55.917659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.443 [2024-07-14 21:25:55.917673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:44.443 [2024-07-14 21:25:55.917950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.443 [2024-07-14 21:25:55.917991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:44.443 [2024-07-14 21:25:55.918135] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.443 [2024-07-14 21:25:55.918171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:44.443 [2024-07-14 21:25:55.918189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61500002a880 is same with the state(5) to be set 00:26:44.443 [2024-07-14 21:25:55.918222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:44.443 [2024-07-14 21:25:55.918250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.443 [2024-07-14 21:25:55.918274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.443 [2024-07-14 21:25:55.918290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.444 [2024-07-14 21:25:55.918325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.444 [2024-07-14 21:25:55.918345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.444 21:25:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:26:45.380 [2024-07-14 21:25:56.918512] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-07-14 21:25:56.918588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:45.380 [2024-07-14 21:25:56.918614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:45.380 [2024-07-14 21:25:56.918659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:45.380 [2024-07-14 21:25:56.918690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.380 [2024-07-14 21:25:56.918709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.380 [2024-07-14 21:25:56.918732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.380 [2024-07-14 21:25:56.918795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.380 [2024-07-14 21:25:56.918818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.638 21:25:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:45.897 [2024-07-14 21:25:57.191699] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.897 21:25:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 88401 00:26:46.464 [2024-07-14 21:25:57.938040] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
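The exchange above is the recovery half of the nvmf_timeout check: while the target has no listener, every reconnect to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED) and bdev_nvme keeps resetting the controller roughly once per second; as soon as timeout.sh re-adds the listener through rpc.py, the pending reset reconnects and "Resetting controller successful." is logged. A minimal shell sketch of that listener toggle follows (a sketch of the pattern, not the actual timeout.sh; the rpc.py path, NQN, address and port are the ones used in this run and would differ in another environment):

    #!/usr/bin/env bash
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc.py path as seen in this CI run
    NQN=nqn.2016-06.io.spdk:cnode1

    # Drop the TCP listener: outstanding I/O is aborted (SQ deletion) and every host
    # reconnect attempt fails with ECONNREFUSED (errno 111), as in the log above.
    $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

    sleep 5    # let a few host-side reset/reconnect cycles fail

    # Restore the listener; the next bdev_nvme reset attempt connects and succeeds.
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420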
00:26:54.581
00:26:54.581                                                       Latency(us)
00:26:54.581 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average       min        max
00:26:54.581 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:54.581   Verification LBA range: start 0x0 length 0x4000
00:26:54.581   NVMe0n1                   :      10.01  5468.54    21.36     0.00   0.00   23368.11   1586.27 3035150.89
00:26:54.581 ===================================================================================================================
00:26:54.581 Total                       :             5468.54    21.36     0.00   0.00   23368.11   1586.27 3035150.89
00:26:54.581 0
00:26:54.581 21:26:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=88509
00:26:54.581 21:26:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:54.581 21:26:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:26:54.581 Running I/O for 10 seconds...
00:26:54.581 21:26:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:54.581 [2024-07-14 21:26:06.060236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.581 [2024-07-14 21:26:06.060339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.581 [2024-07-14 21:26:06.060393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.581 [2024-07-14 21:26:06.060423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.581 [2024-07-14 21:26:06.060440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.581 [2024-07-14 21:26:06.060454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.581 [2024-07-14 21:26:06.060469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.581 [2024-07-14 21:26:06.060483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.581 [2024-07-14 21:26:06.060528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.581 [2024-07-14 21:26:06.060544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.581 [2024-07-14 21:26:06.060561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.581 [2024-07-14 21:26:06.060575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.581 [2024-07-14 21:26:06.060591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.581 [2024-07-14 21:26:06.060605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.060621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.582 [2024-07-14 21:26:06.060635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.060651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.582 [2024-07-14 21:26:06.060665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.060682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.582 [2024-07-14 21:26:06.060696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.060711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(5) to be set 00:26:54.582 [2024-07-14 21:26:06.060732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.060744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.060758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51360 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.060787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.060805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.060817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.060830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51488 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.060843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.060856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.060867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.060879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51496 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.060891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.060904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.060916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.060943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51504 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.060964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.060976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.060987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.060998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51512 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.061010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.061023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.061033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.061044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51520 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.061057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.061069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.061080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.061091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51528 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.061103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.061116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.061126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.061137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51536 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.061150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.061163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.061173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.061185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51544 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.061198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.061231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.061242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.061253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51552 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.061265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:54.582 [2024-07-14 21:26:06.061278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.061288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.061299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51560 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.061312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.061324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.061335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.061347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51568 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.061359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.061371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.061382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.061393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51576 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.061405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.061417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.061428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.061439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51584 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.061451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.061463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.061474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.061484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51592 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.061497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.061509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.061519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.061530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51600 len:8 PRP1 0x0 PRP2 0x0 00:26:54.582 [2024-07-14 21:26:06.061542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.582 [2024-07-14 21:26:06.061555] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.582 [2024-07-14 21:26:06.061565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.582 [2024-07-14 21:26:06.061577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51608 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.061589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.061602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.061612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.061623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51616 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.061636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.061649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.061659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.061670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51624 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.061683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.061695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.061706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.061717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51632 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.061730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.061742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.061753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.061776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51640 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.061792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.061805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.061816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.061827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51648 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.061840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.061852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.061862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.061873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51656 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.061886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.061898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.061909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.061920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51664 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.061932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.061945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.061956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.061967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51672 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.061980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.061993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.062003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.062014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51680 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.062026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.062039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.062049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.062060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51688 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.062073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.062086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.062097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.062108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51696 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.062121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.062133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 
21:26:06.062143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.062154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51704 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.062174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.062187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.062198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.062209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51712 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.062222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.062234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.062244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.062255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51720 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.062268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.062280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.062290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.062301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51728 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.062314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.062326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.062337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.062348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51736 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.062360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.062372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.062383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.062394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51744 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.062406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.062418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.062429] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.062440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51752 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.062452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.583 [2024-07-14 21:26:06.062468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.583 [2024-07-14 21:26:06.062479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.583 [2024-07-14 21:26:06.062490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51760 len:8 PRP1 0x0 PRP2 0x0 00:26:54.583 [2024-07-14 21:26:06.062504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.062516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.062527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.062538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51768 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.062552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.062565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.062576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.062587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51776 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.062599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.062611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.062622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.062633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51784 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.062645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.062657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.062668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.062679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51792 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.062692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.062705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.062716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.062727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51800 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.062739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.062780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.062792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.062804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51808 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.062817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.062829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.062840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.062851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51816 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.062864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.062879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.062890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.062902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51824 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.062915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.062928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.062938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.062949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51832 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.062964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.062977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.062988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.062999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51840 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.063011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.063024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.063034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 
21:26:06.063046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51848 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.063058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.063070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.063080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.063091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51856 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.063104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.063116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.063127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.063138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51864 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.063150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.063163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.063173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.063185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51872 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.063197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.063209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.063220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.063230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51880 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.063243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.063258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.063269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.063280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51888 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.063292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.063305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.063322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.063333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51896 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.063347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.063360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.063371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.063382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51904 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.063394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.063407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.063417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.063428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51912 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.063441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.063453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.063464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.063475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51920 len:8 PRP1 0x0 PRP2 0x0 00:26:54.584 [2024-07-14 21:26:06.063487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.584 [2024-07-14 21:26:06.063500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.584 [2024-07-14 21:26:06.063511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.584 [2024-07-14 21:26:06.063522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51928 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.063535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.063547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.063558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.063569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51936 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.063581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.063594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.063604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.063615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:51944 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.063628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.063642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.063653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.063665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51952 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.063677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.063690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.063700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.063711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51960 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.063726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.063738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.063749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.063773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51968 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.063786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.063799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.063811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.063822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51976 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.063834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.063863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.063874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.063886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51984 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.063899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.063912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.063922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.063934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51992 len:8 PRP1 0x0 PRP2 0x0 
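Every abort in this stretch is reported the same way: nvme_qpair_abort_queued_reqs() drains the queued I/O and nvme_qpair_manual_complete_request() completes each command with status (00/08), i.e. status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), while p/m/dnr are the phase, more and do-not-retry bits of the completion. As a hedged illustration of how those printed fields map onto completion dword 3 (a standalone helper following the standard CQE layout, not SPDK's own spdk_nvme_print_completion):

    #!/usr/bin/env bash
    # Unpack NVMe completion dword 3 the way the lines above display it.
    # CQE DW3 layout: CID [15:0], phase [16], SC [24:17], SCT [27:25], M [30], DNR [31].
    decode_cpl_dw3() {
        local dw3=$(( $1 ))
        printf 'cid:%d sct/sc:(%02x/%02x) p:%d m:%d dnr:%d\n' \
            $(( dw3 & 0xffff )) \
            $(( (dw3 >> 25) & 0x7 )) $(( (dw3 >> 17) & 0xff )) \
            $(( (dw3 >> 16) & 0x1 )) $(( (dw3 >> 30) & 0x1 )) $(( (dw3 >> 31) & 0x1 ))
    }
    decode_cpl_dw3 0x00100000   # -> cid:0 sct/sc:(00/08) p:0 m:0 dnr:0, the status seen above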
00:26:54.585 [2024-07-14 21:26:06.063947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.063960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.063971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.063982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52000 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.063995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.064007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.064018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.064030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52008 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.064043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.064057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.064068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.064080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52016 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.064092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.064105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.064116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.064127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52024 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.064143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.064156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.064168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.064179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52032 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.064192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.064205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.064216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.064227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52040 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.064240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.064253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.064263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.064275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52048 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.064288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.064301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.064312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.064323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52056 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.064336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.064371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.064383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.064394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52064 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.064407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.585 [2024-07-14 21:26:06.064420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.585 [2024-07-14 21:26:06.075250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.585 [2024-07-14 21:26:06.075332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52072 len:8 PRP1 0x0 PRP2 0x0 00:26:54.585 [2024-07-14 21:26:06.075361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.586 [2024-07-14 21:26:06.075395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.586 [2024-07-14 21:26:06.075415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.586 [2024-07-14 21:26:06.075433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52080 len:8 PRP1 0x0 PRP2 0x0 00:26:54.586 [2024-07-14 21:26:06.075452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.586 [2024-07-14 21:26:06.075471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.586 [2024-07-14 21:26:06.075486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.586 [2024-07-14 21:26:06.075503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52088 len:8 PRP1 0x0 PRP2 0x0 00:26:54.586 [2024-07-14 21:26:06.075522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.586 [2024-07-14 21:26:06.075540 - 21:26:06.078463] nvme_qpair.c: [repeated for each remaining queued request on qid:1: 579:nvme_qpair_abort_queued_reqs aborts the queued i/o, 558:nvme_qpair_manual_complete_request completes it manually, 243:nvme_io_qpair_print_command prints the command (WRITE sqid:1 cid:0 nsid:1 lba:52096 through lba:52360, len:8, then READ sqid:1 cid:0 nsid:1 lba:51368 through lba:51416, len:8, PRP1 0x0 PRP2 0x0), and 474:spdk_nvme_print_completion reports ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 for each one]
00:26:54.589 [2024-07-14 21:26:06.078846] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller.
00:26:54.589 [2024-07-14 21:26:06.079058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.589 [2024-07-14 21:26:06.079099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.589 [2024-07-14 21:26:06.079126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.589 [2024-07-14 21:26:06.079145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.589 [2024-07-14 21:26:06.079165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.589 [2024-07-14 21:26:06.079183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.589 [2024-07-14 21:26:06.079202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.589 [2024-07-14 21:26:06.079221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.589 [2024-07-14 21:26:06.079239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:54.589 [2024-07-14 21:26:06.079632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.589 [2024-07-14 21:26:06.079709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:54.589 [2024-07-14 21:26:06.079908] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.589 [2024-07-14 21:26:06.079951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:54.589 [2024-07-14 21:26:06.079974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:54.589 [2024-07-14 21:26:06.080012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:54.589 [2024-07-14 21:26:06.080046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.589 [2024-07-14 21:26:06.080076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.589 [2024-07-14 21:26:06.080096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.589 [2024-07-14 21:26:06.080138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
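The connect() failures above (and in the retries that follow) report errno = 111, which on Linux is ECONNREFUSED: the target's TCP listener on 10.0.0.2:4420 is gone at this point, so nothing is accepting the reconnect. A quick way to confirm the errno name on a typical Linux host (header location may vary by distro):

  # errno 111 on Linux is ECONNREFUSED (connection refused)
  grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
  #   #define ECONNREFUSED    111     /* Connection refused */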
00:26:54.589 [2024-07-14 21:26:06.080162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.589 21:26:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:26:55.964 [2024-07-14 21:26:07.080327] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.964 [2024-07-14 21:26:07.080433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:55.964 [2024-07-14 21:26:07.080484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:55.964 [2024-07-14 21:26:07.080564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:55.964 [2024-07-14 21:26:07.080595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.964 [2024-07-14 21:26:07.080610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.964 [2024-07-14 21:26:07.080625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.964 [2024-07-14 21:26:07.080664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.964 [2024-07-14 21:26:07.080682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:56.897 [2024-07-14 21:26:08.080892] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.897 [2024-07-14 21:26:08.080979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:56.897 [2024-07-14 21:26:08.081003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:56.897 [2024-07-14 21:26:08.081042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:56.897 [2024-07-14 21:26:08.081072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.897 [2024-07-14 21:26:08.081088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.897 [2024-07-14 21:26:08.081104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.897 [2024-07-14 21:26:08.081142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:56.897 [2024-07-14 21:26:08.081161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:57.832 [2024-07-14 21:26:09.085010] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.832 [2024-07-14 21:26:09.085083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:57.832 [2024-07-14 21:26:09.085106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:57.832 [2024-07-14 21:26:09.085425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:57.832 [2024-07-14 21:26:09.085703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:57.832 [2024-07-14 21:26:09.085733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:57.832 [2024-07-14 21:26:09.085764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:57.832 [2024-07-14 21:26:09.090252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.832 [2024-07-14 21:26:09.090313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:57.832 21:26:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.832 [2024-07-14 21:26:09.347232] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.832 21:26:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 88509 00:26:58.768 [2024-07-14 21:26:10.137033] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
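The recovery above is driven by the test script rather than by the target: the listener was dropped shortly before this excerpt, the script sleeps while the host keeps retrying, and re-adding the listener lets the pending controller reset finish ("Resetting controller successful."). A minimal sketch of that cycle, reconstructed only from the rpc.py calls visible in this trace; the remove call's flags are mirrored from the matching call at host/timeout.sh@126 further down:

  # drop the TCP listener so the initiator's qpair breaks and reconnects start failing
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3   # host/timeout.sh@101: let the initiator burn a few ECONNREFUSED reconnect attempts
  # restore the listener; the next reconnect attempt succeeds and the reset completes
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420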
00:27:04.057
00:27:04.057                                                                             Latency(us)
00:27:04.057 Device Information            : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:27:04.057 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:04.057 Verification LBA range: start 0x0 length 0x4000
00:27:04.057 NVMe0n1                       :      10.01    4085.04      15.96    3242.50     0.00    17433.70     781.96 3035150.89
00:27:04.057 ===================================================================================================================
00:27:04.057 Total                         :               4085.04      15.96    3242.50     0.00    17433.70       0.00 3035150.89
00:27:04.057 0
00:27:04.057 21:26:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 88376
00:27:04.057 21:26:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 88376 ']'
00:27:04.057 21:26:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 88376
00:27:04.057 21:26:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:27:04.057 21:26:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:04.057 21:26:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88376
00:27:04.057 killing process with pid 88376
Received shutdown signal, test time was about 10.000000 seconds
00:27:04.057
00:27:04.057                                                                             Latency(us)
00:27:04.057 Device Information            : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:27:04.057 ===================================================================================================================
00:27:04.057 Total                         :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:27:04.057 21:26:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:27:04.057 21:26:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:27:04.057 21:26:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88376'
00:27:04.057 21:26:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 88376
00:27:04.057 21:26:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 88376
00:27:04.625 21:26:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=88626
00:27:04.625 21:26:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:27:04.625 21:26:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 88626 /var/tmp/bdevperf.sock
00:27:04.625 21:26:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 88626 ']'
00:27:04.625 21:26:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:04.625 21:26:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:04.625 21:26:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:04.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:04.625 21:26:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:04.625 21:26:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:27:04.896 [2024-07-14 21:26:16.187513] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
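Here a fresh bdevperf (pid 88626) is started with -z, i.e. idle until driven over its private RPC socket, and the harness waits for that socket with waitforlisten. A rough stand-in for what those two trace lines do; the polling loop is an illustrative simplification of the real waitforlisten helper from autotest_common.sh:

  # start bdevperf idle (-z) on its own RPC socket, as host/timeout.sh@109-110 does
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!
  # simplified wait: poll the UNIX-domain RPC socket until it answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done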
00:27:04.896 [2024-07-14 21:26:16.187700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88626 ] 00:27:04.896 [2024-07-14 21:26:16.363097] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.157 [2024-07-14 21:26:16.558534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.416 [2024-07-14 21:26:16.749965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:05.673 21:26:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:05.673 21:26:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:27:05.673 21:26:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=88642 00:27:05.673 21:26:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:27:05.673 21:26:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88626 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:27:05.932 21:26:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:27:06.499 NVMe0n1 00:27:06.499 21:26:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=88678 00:27:06.499 21:26:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:06.499 21:26:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:27:06.499 Running I/O for 10 seconds... 
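The setup for this second timeout scenario is all in the trace above: NVMe bdev options are set, a bpftrace probe (scripts/bpf/nvmf_timeout.bt) is attached to the bdevperf process, the controller is attached with --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5 (as I read these options: retry the connection roughly every 2 s and give the controller up after about 5 s of continuous loss), and perform_tests starts the queued randread workload. Pulled together, the same sequence looks like this; the paths and RPC socket are the ones used by this job:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $rpc bdev_nvme_set_options -r -1 -e 9
  # trace timeout handling inside the bdevperf process (pid 88626 in this run)
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88626 \
      /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &
  # attach the target namespace as bdev NVMe0n1 with the reconnect policy under test
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # kick off the randread run bdevperf was configured with (-q 128 -o 4096 -t 10)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests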
00:27:07.434 21:26:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:07.697 [2024-07-14 21:26:19.063414 - 21:26:19.063623] nvme_qpair.c: [the four outstanding admin ASYNC EVENT REQUEST (0c) commands, qid:0 cid:0 through cid:3, are each printed by 223:nvme_admin_qpair_print_command and completed by 474:spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:07.697 [2024-07-14 21:26:19.063637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:27:07.697 [2024-07-14 21:26:19.063983 onward] nvme_qpair.c: [the queued I/O on qid:1 is then drained the same way: for each outstanding READ (cid:126 counting down through cid:31 in this span, nsid:1, len:8, assorted lbas, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) 243:nvme_io_qpair_print_command prints the command and 474:spdk_nvme_print_completion reports ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the run continues below]
00:27:07.699 [2024-07-14 21:26:19.067315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-07-14 21:26:19.067348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-07-14 21:26:19.067381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-07-14 21:26:19.067413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-07-14 21:26:19.067445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-07-14 21:26:19.067477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-07-14 21:26:19.067509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-07-14 21:26:19.067541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-07-14 21:26:19.067575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-07-14 21:26:19.067609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-07-14 21:26:19.067641] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-07-14 21:26:19.067675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-07-14 21:26:19.067731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-07-14 21:26:19.067785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-07-14 21:26:19.067820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-07-14 21:26:19.067852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-07-14 21:26:19.067867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.067883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-07-14 21:26:19.067902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.067918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-07-14 21:26:19.067934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.067950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-07-14 21:26:19.067966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.067982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-07-14 21:26:19.068000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.068017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-07-14 21:26:19.068033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.068049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-07-14 21:26:19.068065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.068081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-07-14 21:26:19.068097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.068113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-07-14 21:26:19.068129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.068145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-07-14 21:26:19.068163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.068179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-07-14 21:26:19.068195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.068211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-07-14 21:26:19.068230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.068247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-07-14 21:26:19.068263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.068279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:27792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-07-14 21:26:19.068296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.068312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-07-14 21:26:19.068338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.068355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-07-14 21:26:19.068371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.068385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:27:07.700 [2024-07-14 21:26:19.068407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:07.700 [2024-07-14 21:26:19.068420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:07.700 [2024-07-14 21:26:19.068440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107520 len:8 PRP1 0x0 PRP2 0x0 00:27:07.700 [2024-07-14 21:26:19.068455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-07-14 21:26:19.068724] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 00:27:07.700 [2024-07-14 21:26:19.069099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.700 [2024-07-14 21:26:19.069152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:07.700 [2024-07-14 21:26:19.069308] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.700 [2024-07-14 21:26:19.069343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:27:07.700 [2024-07-14 21:26:19.069363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:27:07.700 [2024-07-14 21:26:19.069394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:07.700 [2024-07-14 21:26:19.069425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.700 [2024-07-14 21:26:19.069443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.700 [2024-07-14 21:26:19.069468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.700 [2024-07-14 21:26:19.069505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
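The wall of NOTICE lines above is the expected teardown path rather than a malfunction: once the TCP connection to the target drops, every READ still queued on I/O qpair 1 is completed manually with the NVMe status ABORTED - SQ DELETION (status code type 00, status code 08), the qpair is freed, and bdev_nvme schedules a controller reset. The reset's first reconnect to 10.0.0.2:4420 then fails with errno 111 (ECONNREFUSED), which is exactly the unreachable-target condition the timeout test is built around. A small, hypothetical post-mortem helper for a saved copy of this console output; "build.log" is a stand-in path, not something the harness produces:

# Count the two signatures seen above in a saved log (the log path is an assumption).
grep -c 'ABORTED - SQ DELETION' build.log            # queued commands aborted on qpair teardown
grep -c 'connect() failed, errno = 111' build.log    # reconnect attempts refused by the dead listener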
00:27:07.700 [2024-07-14 21:26:19.069526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.700 21:26:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 88678 00:27:09.605 [2024-07-14 21:26:21.069761] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.605 [2024-07-14 21:26:21.069847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:27:09.605 [2024-07-14 21:26:21.069878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:27:09.605 [2024-07-14 21:26:21.069918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:09.605 [2024-07-14 21:26:21.069952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:09.605 [2024-07-14 21:26:21.069968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:09.605 [2024-07-14 21:26:21.069987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:09.605 [2024-07-14 21:26:21.070029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:09.605 [2024-07-14 21:26:21.070050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.138 [2024-07-14 21:26:23.070299] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.138 [2024-07-14 21:26:23.070378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:27:12.139 [2024-07-14 21:26:23.070406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:27:12.139 [2024-07-14 21:26:23.070474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:12.139 [2024-07-14 21:26:23.070506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.139 [2024-07-14 21:26:23.070521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.139 [2024-07-14 21:26:23.070543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.139 [2024-07-14 21:26:23.070600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:12.139 [2024-07-14 21:26:23.070621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.041 [2024-07-14 21:26:25.070718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:14.041 [2024-07-14 21:26:25.070797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.041 [2024-07-14 21:26:25.070825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.041 [2024-07-14 21:26:25.070843] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:14.041 [2024-07-14 21:26:25.070887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.610 00:27:14.610 Latency(us) 00:27:14.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.610 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:27:14.610 NVMe0n1 : 8.15 1572.91 6.14 15.70 0.00 80479.71 10724.07 7046430.72 00:27:14.610 =================================================================================================================== 00:27:14.610 Total : 1572.91 6.14 15.70 0.00 80479.71 10724.07 7046430.72 00:27:14.610 0 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:14.610 Attaching 5 probes... 00:27:14.610 1350.005005: reset bdev controller NVMe0 00:27:14.610 1350.128185: reconnect bdev controller NVMe0 00:27:14.610 3350.494914: reconnect delay bdev controller NVMe0 00:27:14.610 3350.521026: reconnect bdev controller NVMe0 00:27:14.610 5351.023407: reconnect delay bdev controller NVMe0 00:27:14.610 5351.078329: reconnect bdev controller NVMe0 00:27:14.610 7351.585486: reconnect delay bdev controller NVMe0 00:27:14.610 7351.627269: reconnect bdev controller NVMe0 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 88642 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 88626 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 88626 ']' 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 88626 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88626 00:27:14.610 killing process with pid 88626 00:27:14.610 Received shutdown signal, test time was about 8.212145 seconds 00:27:14.610 00:27:14.610 Latency(us) 00:27:14.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.610 =================================================================================================================== 00:27:14.610 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88626' 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout 
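The bdevperf summary and the trace.txt dump above are the two artifacts the timeout test actually judges: the random-read job ran for about 8.2 s against a controller that never came back, and trace.txt records a reset at roughly 1350 ms followed by reconnect-delay/reconnect pairs at roughly 3350 ms, 5351 ms and 7351 ms, i.e. back-off periods about 2 s apart. The pass condition is simply that more than two "reconnect delay" events were captured, which is what the grep -c 'reconnect delay bdev controller NVMe0' and (( 3 <= 2 )) lines in the log express. A re-statement of that check as standalone shell; the variable names are mine, the trace path is the one printed above, and this is a paraphrase of timeout.sh, not its verbatim source:

trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
if (( delays <= 2 )); then
    # Fewer than three back-off periods would mean the reconnect logic did not keep retrying as expected.
    echo "expected more than 2 reconnect delays, got $delays" >&2
    exit 1
fi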
-- common/autotest_common.sh@967 -- # kill 88626 00:27:14.610 21:26:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 88626 00:27:15.986 21:26:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:16.286 rmmod nvme_tcp 00:27:16.286 rmmod nvme_fabrics 00:27:16.286 rmmod nvme_keyring 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 88167 ']' 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 88167 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 88167 ']' 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 88167 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88167 00:27:16.286 killing process with pid 88167 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88167' 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 88167 00:27:16.286 21:26:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 88167 00:27:17.665 21:26:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:17.665 21:26:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:17.665 21:26:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:17.665 21:26:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:17.665 21:26:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:17.665 21:26:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.666 21:26:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.666 21:26:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.666 21:26:29 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:17.666 00:27:17.666 real 0m51.706s 00:27:17.666 user 2m30.335s 
00:27:17.666 sys 0m5.618s 00:27:17.666 21:26:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:17.666 ************************************ 00:27:17.666 END TEST nvmf_timeout 00:27:17.666 ************************************ 00:27:17.666 21:26:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.666 21:26:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:17.666 21:26:29 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:27:17.666 21:26:29 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:27:17.666 21:26:29 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.666 21:26:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.925 21:26:29 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:27:17.925 00:27:17.925 real 16m19.002s 00:27:17.925 user 42m40.501s 00:27:17.925 sys 4m3.518s 00:27:17.925 ************************************ 00:27:17.925 END TEST nvmf_tcp 00:27:17.925 ************************************ 00:27:17.925 21:26:29 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:17.925 21:26:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.925 21:26:29 -- common/autotest_common.sh@1142 -- # return 0 00:27:17.925 21:26:29 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:27:17.925 21:26:29 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:17.925 21:26:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:17.925 21:26:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.925 21:26:29 -- common/autotest_common.sh@10 -- # set +x 00:27:17.925 ************************************ 00:27:17.925 START TEST nvmf_dif 00:27:17.925 ************************************ 00:27:17.925 21:26:29 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:17.925 * Looking for test storage... 
00:27:17.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:17.925 21:26:29 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:17.925 21:26:29 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.925 21:26:29 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.925 21:26:29 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.925 21:26:29 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.925 21:26:29 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.925 21:26:29 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.925 21:26:29 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:17.925 21:26:29 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:17.925 21:26:29 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:17.925 21:26:29 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:17.925 21:26:29 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:17.925 21:26:29 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:17.925 21:26:29 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.925 21:26:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:17.925 21:26:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:17.925 21:26:29 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:17.925 Cannot find device "nvmf_tgt_br" 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@155 -- # true 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:17.925 Cannot find device "nvmf_tgt_br2" 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@156 -- # true 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:17.925 Cannot find device "nvmf_tgt_br" 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@158 -- # true 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:17.925 Cannot find device "nvmf_tgt_br2" 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@159 -- # true 00:27:17.925 21:26:29 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:18.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@162 -- # true 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:18.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@163 -- # true 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:18.184 
21:26:29 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:18.184 21:26:29 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:18.444 21:26:29 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:18.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:27:18.444 00:27:18.444 --- 10.0.0.2 ping statistics --- 00:27:18.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.444 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:27:18.444 21:26:29 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:18.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:18.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:27:18.444 00:27:18.444 --- 10.0.0.3 ping statistics --- 00:27:18.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.444 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:27:18.444 21:26:29 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:18.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:27:18.444 00:27:18.444 --- 10.0.0.1 ping statistics --- 00:27:18.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.444 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:27:18.444 21:26:29 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.444 21:26:29 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:27:18.444 21:26:29 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:18.444 21:26:29 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:18.702 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:18.702 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:18.702 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:18.702 21:26:30 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.702 21:26:30 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:18.702 21:26:30 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:18.702 21:26:30 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.702 21:26:30 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:18.702 21:26:30 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:18.702 21:26:30 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:18.702 21:26:30 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:18.702 21:26:30 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:18.702 21:26:30 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.702 21:26:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:18.702 Waiting for process to start up and listen on UNIX domain socket 
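Taken together, the nvmf_veth_init commands above build the entire virtual test network before any NVMe-oF traffic flows: a namespace nvmf_tgt_ns_spdk for the target, veth pairs nvmf_init_if/nvmf_init_br on the initiator side and nvmf_tgt_if/nvmf_tgt_br plus nvmf_tgt_if2/nvmf_tgt_br2 on the target side, a bridge nvmf_br joining the *_br ends, 10.0.0.1/24 on the initiator and 10.0.0.2/10.0.0.3 inside the namespace, an iptables ACCEPT rule for TCP port 4420, and three pings to prove connectivity. A condensed sketch of the same topology, paraphrased from the commands visible above rather than copied out of nvmf/common.sh (ordering is simplified and cleanup/error handling is omitted):

# Namespace for the target and the three veth pairs.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: the initiator stays in the root namespace, target addresses live in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up and bridge the *_br ends together.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Let NVMe/TCP traffic in, let the bridge forward, then verify reachability both ways.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1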
/var/tmp/spdk.sock... 00:27:18.702 21:26:30 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=89135 00:27:18.702 21:26:30 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 89135 00:27:18.702 21:26:30 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 89135 ']' 00:27:18.702 21:26:30 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.702 21:26:30 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:18.702 21:26:30 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:18.702 21:26:30 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.702 21:26:30 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:18.702 21:26:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:18.961 [2024-07-14 21:26:30.304838] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:18.961 [2024-07-14 21:26:30.304987] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.961 [2024-07-14 21:26:30.481911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.219 [2024-07-14 21:26:30.725169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.219 [2024-07-14 21:26:30.725247] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.219 [2024-07-14 21:26:30.725267] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.219 [2024-07-14 21:26:30.725284] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.219 [2024-07-14 21:26:30.725308] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:19.219 [2024-07-14 21:26:30.725355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.478 [2024-07-14 21:26:30.927508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:19.736 21:26:31 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:19.736 21:26:31 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:27:19.736 21:26:31 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:19.736 21:26:31 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:19.736 21:26:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:19.736 21:26:31 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.736 21:26:31 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:19.736 21:26:31 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:19.736 21:26:31 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.736 21:26:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:19.736 [2024-07-14 21:26:31.279017] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:19.736 21:26:31 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.736 21:26:31 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:19.736 21:26:31 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:19.736 21:26:31 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.736 21:26:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:19.995 ************************************ 00:27:19.995 START TEST fio_dif_1_default 00:27:19.995 ************************************ 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:19.995 bdev_null0 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.995 21:26:31 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:19.995 [2024-07-14 21:26:31.323248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.995 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:19.996 { 00:27:19.996 "params": { 00:27:19.996 "name": "Nvme$subsystem", 00:27:19.996 "trtype": "$TEST_TRANSPORT", 00:27:19.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.996 "adrfam": "ipv4", 00:27:19.996 "trsvcid": "$NVMF_PORT", 00:27:19.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.996 "hdgst": ${hdgst:-false}, 00:27:19.996 "ddgst": ${ddgst:-false} 00:27:19.996 }, 00:27:19.996 "method": "bdev_nvme_attach_controller" 00:27:19.996 } 00:27:19.996 EOF 00:27:19.996 )") 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:19.996 "params": { 00:27:19.996 "name": "Nvme0", 00:27:19.996 "trtype": "tcp", 00:27:19.996 "traddr": "10.0.0.2", 00:27:19.996 "adrfam": "ipv4", 00:27:19.996 "trsvcid": "4420", 00:27:19.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:19.996 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:19.996 "hdgst": false, 00:27:19.996 "ddgst": false 00:27:19.996 }, 00:27:19.996 "method": "bdev_nvme_attach_controller" 00:27:19.996 }' 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:19.996 21:26:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:20.254 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:20.254 fio-3.35 00:27:20.254 Starting 1 thread 00:27:32.449 00:27:32.449 filename0: (groupid=0, jobs=1): err= 0: pid=89199: Sun Jul 14 21:26:42 2024 00:27:32.449 read: IOPS=6273, BW=24.5MiB/s (25.7MB/s)(245MiB/10001msec) 00:27:32.449 slat (nsec): min=7751, max=78152, avg=12759.51, stdev=6708.77 00:27:32.449 clat (usec): min=445, max=2582, avg=599.13, stdev=58.31 00:27:32.450 lat (usec): min=454, max=2610, avg=611.89, stdev=59.60 00:27:32.450 clat percentiles (usec): 00:27:32.450 | 1.00th=[ 486], 5.00th=[ 515], 10.00th=[ 529], 20.00th=[ 553], 00:27:32.450 | 30.00th=[ 570], 40.00th=[ 586], 50.00th=[ 603], 60.00th=[ 611], 00:27:32.450 | 70.00th=[ 627], 80.00th=[ 644], 90.00th=[ 668], 95.00th=[ 685], 00:27:32.450 | 99.00th=[ 734], 99.50th=[ 791], 99.90th=[ 947], 99.95th=[ 1090], 00:27:32.450 | 99.99th=[ 1418] 00:27:32.450 bw ( KiB/s): min=23920, max=25824, per=100.00%, avg=25109.05, stdev=451.98, samples=19 00:27:32.450 iops : min= 5980, max= 6456, avg=6277.26, stdev=113.00, samples=19 00:27:32.450 lat (usec) : 500=2.75%, 750=96.58%, 1000=0.62% 00:27:32.450 lat (msec) : 2=0.05%, 4=0.01% 00:27:32.450 cpu : usr=85.46%, sys=12.49%, ctx=29, majf=0, minf=1062 00:27:32.450 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:32.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.450 issued rwts: total=62740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.450 latency : target=0, window=0, percentile=100.00%, depth=4 
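The blocks above show how stock fio is pointed at an SPDK bdev instead of a kernel block device: gen_nvmf_target_json emits a bdev_nvme_attach_controller entry named Nvme0 aimed at the listener created earlier, the spdk_bdev ioengine is pulled in via LD_PRELOAD (together with libasan.so.8, since this is an ASAN build), and the job itself is a single-threaded 4 KiB random read at queue depth 4 on filename0. The harness feeds both files through /dev/fd substitutions; the sketch below uses ordinary files instead, and the outer "subsystems" wrapper, the job-file layout and the Nvme0n1 bdev name are reconstructions from the log, not text copied from dif.sh:

# Bdev configuration handed to the fio plugin (parameter values are the ones printed above).
cat > /tmp/nvme0_conf.json <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF
# Job matching the "rw=randread, bs=4096B, ioengine=spdk_bdev, iodepth=4" banner in the fio output.
cat > /tmp/filename0.fio <<'EOF'
[filename0]
filename=Nvme0n1
rw=randread
bs=4096
iodepth=4
thread=1
EOF
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0_conf.json /tmp/filename0.fio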
00:27:32.450 00:27:32.450 Run status group 0 (all jobs): 00:27:32.450 READ: bw=24.5MiB/s (25.7MB/s), 24.5MiB/s-24.5MiB/s (25.7MB/s-25.7MB/s), io=245MiB (257MB), run=10001-10001msec 00:27:32.450 ----------------------------------------------------- 00:27:32.450 Suppressions used: 00:27:32.450 count bytes template 00:27:32.450 1 8 /usr/src/fio/parse.c 00:27:32.450 1 8 libtcmalloc_minimal.so 00:27:32.450 1 904 libcrypto.so 00:27:32.450 ----------------------------------------------------- 00:27:32.450 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.450 00:27:32.450 real 0m12.300s 00:27:32.450 user 0m10.422s 00:27:32.450 sys 0m1.610s 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:32.450 ************************************ 00:27:32.450 END TEST fio_dif_1_default 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:32.450 ************************************ 00:27:32.450 21:26:43 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:32.450 21:26:43 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:32.450 21:26:43 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:32.450 21:26:43 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:32.450 21:26:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:32.450 ************************************ 00:27:32.450 START TEST fio_dif_1_multi_subsystems 00:27:32.450 ************************************ 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:32.450 21:26:43 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.450 bdev_null0 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.450 [2024-07-14 21:26:43.678280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.450 bdev_null1 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:32.450 
21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.450 { 00:27:32.450 "params": { 00:27:32.450 "name": "Nvme$subsystem", 00:27:32.450 "trtype": "$TEST_TRANSPORT", 00:27:32.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.450 "adrfam": "ipv4", 00:27:32.450 "trsvcid": "$NVMF_PORT", 00:27:32.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.450 "hdgst": ${hdgst:-false}, 00:27:32.450 "ddgst": ${ddgst:-false} 00:27:32.450 }, 00:27:32.450 "method": "bdev_nvme_attach_controller" 00:27:32.450 } 00:27:32.450 EOF 00:27:32.450 )") 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:32.450 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.451 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.451 { 00:27:32.451 "params": { 00:27:32.451 "name": "Nvme$subsystem", 00:27:32.451 "trtype": "$TEST_TRANSPORT", 00:27:32.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.451 "adrfam": "ipv4", 00:27:32.451 "trsvcid": "$NVMF_PORT", 00:27:32.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.451 "hdgst": ${hdgst:-false}, 00:27:32.451 "ddgst": ${ddgst:-false} 00:27:32.451 }, 00:27:32.451 "method": "bdev_nvme_attach_controller" 00:27:32.451 } 00:27:32.451 EOF 00:27:32.451 )") 00:27:32.451 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:32.451 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:32.451 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:32.451 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
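For reference, the create_subsystems trace above boils down to four RPCs per subsystem: create a DIF-capable null bdev, create an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A minimal sketch of that sequence driven directly through scripts/rpc.py follows; the test issues the same calls through its rpc_cmd wrapper, and a default RPC socket plus an already-created TCP transport (set up earlier in this test) are assumptions here, not something shown in this part of the log.

# Sketch only: rough equivalent of create_subsystems 0 1 as traced above.
# Assumes a running nvmf target on the default RPC socket and that the tcp
# transport was already created earlier in the test run.
RPC=./scripts/rpc.py
for i in 0 1; do
    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    $RPC bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done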
00:27:32.451 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:32.451 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:32.451 "params": { 00:27:32.451 "name": "Nvme0", 00:27:32.451 "trtype": "tcp", 00:27:32.451 "traddr": "10.0.0.2", 00:27:32.451 "adrfam": "ipv4", 00:27:32.451 "trsvcid": "4420", 00:27:32.451 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.451 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:32.451 "hdgst": false, 00:27:32.451 "ddgst": false 00:27:32.451 }, 00:27:32.451 "method": "bdev_nvme_attach_controller" 00:27:32.451 },{ 00:27:32.451 "params": { 00:27:32.451 "name": "Nvme1", 00:27:32.451 "trtype": "tcp", 00:27:32.451 "traddr": "10.0.0.2", 00:27:32.451 "adrfam": "ipv4", 00:27:32.451 "trsvcid": "4420", 00:27:32.451 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:32.451 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:32.451 "hdgst": false, 00:27:32.451 "ddgst": false 00:27:32.451 }, 00:27:32.451 "method": "bdev_nvme_attach_controller" 00:27:32.451 }' 00:27:32.451 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:32.451 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:32.451 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:27:32.451 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:32.451 21:26:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.451 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:32.451 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:32.451 fio-3.35 00:27:32.451 Starting 2 threads 00:27:44.682 00:27:44.682 filename0: (groupid=0, jobs=1): err= 0: pid=89357: Sun Jul 14 21:26:54 2024 00:27:44.682 read: IOPS=3766, BW=14.7MiB/s (15.4MB/s)(147MiB/10001msec) 00:27:44.682 slat (nsec): min=7809, max=73555, avg=16739.18, stdev=6096.58 00:27:44.682 clat (usec): min=708, max=1793, avg=1015.54, stdev=116.82 00:27:44.682 lat (usec): min=717, max=1827, avg=1032.28, stdev=118.62 00:27:44.682 clat percentiles (usec): 00:27:44.682 | 1.00th=[ 775], 5.00th=[ 832], 10.00th=[ 857], 20.00th=[ 906], 00:27:44.682 | 30.00th=[ 955], 40.00th=[ 988], 50.00th=[ 1020], 60.00th=[ 1057], 00:27:44.682 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1205], 00:27:44.682 | 99.00th=[ 1287], 99.50th=[ 1319], 99.90th=[ 1500], 99.95th=[ 1598], 00:27:44.682 | 99.99th=[ 1745] 00:27:44.682 bw ( KiB/s): min=14016, max=17248, per=50.21%, avg=15130.95, stdev=1276.41, samples=19 00:27:44.682 iops : min= 3504, max= 4312, avg=3782.74, stdev=319.10, samples=19 00:27:44.682 lat (usec) : 750=0.23%, 1000=42.68% 00:27:44.682 lat (msec) : 2=57.10% 00:27:44.682 cpu : usr=90.41%, sys=8.15%, ctx=22, majf=0, minf=1063 00:27:44.682 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:44.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.682 issued rwts: total=37672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.682 
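The JSON document printed just above is what the spdk_bdev fio ioengine reads via --spdk_json_conf: before I/O starts, the plugin issues bdev_nvme_attach_controller for Nvme0 and Nvme1 against the two subsystems, and the filename0/filename1 job sections then read from the resulting bdevs. A rough, hand-run equivalent of the traced invocation is sketched below; the randread/4k/iodepth=4/10s parameters come from the banner and run lines, while the namespace bdev names Nvme0n1 and Nvme1n1, the --thread=1 setting, and the relative plugin path are assumptions about the generated job file rather than values visible in this log.

# Sketch only: hand-run equivalent of the traced fio invocation (paths relative
# to the SPDK repo). bdev.json is the JSON printed above; the ASAN runtime is
# resolved and preloaded the same way the trace does it (ldd | grep | awk).
asan_lib=$(ldd ./build/fio/spdk_bdev | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib ./build/fio/spdk_bdev" fio \
    --ioengine=spdk_bdev --spdk_json_conf=./bdev.json --thread=1 \
    --rw=randread --bs=4k --iodepth=4 --time_based=1 --runtime=10 \
    --name=filename0 --filename=Nvme0n1 \
    --name=filename1 --filename=Nvme1n1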
latency : target=0, window=0, percentile=100.00%, depth=4 00:27:44.682 filename1: (groupid=0, jobs=1): err= 0: pid=89358: Sun Jul 14 21:26:54 2024 00:27:44.682 read: IOPS=3766, BW=14.7MiB/s (15.4MB/s)(147MiB/10002msec) 00:27:44.682 slat (nsec): min=7762, max=76603, avg=16750.50, stdev=6251.68 00:27:44.682 clat (usec): min=762, max=1767, avg=1014.71, stdev=111.43 00:27:44.682 lat (usec): min=775, max=1822, avg=1031.46, stdev=113.03 00:27:44.682 clat percentiles (usec): 00:27:44.682 | 1.00th=[ 807], 5.00th=[ 840], 10.00th=[ 857], 20.00th=[ 898], 00:27:44.682 | 30.00th=[ 955], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1057], 00:27:44.682 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188], 00:27:44.682 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1500], 99.95th=[ 1598], 00:27:44.682 | 99.99th=[ 1729] 00:27:44.682 bw ( KiB/s): min=14016, max=17248, per=50.21%, avg=15130.95, stdev=1276.41, samples=19 00:27:44.682 iops : min= 3504, max= 4312, avg=3782.74, stdev=319.10, samples=19 00:27:44.682 lat (usec) : 1000=42.64% 00:27:44.682 lat (msec) : 2=57.36% 00:27:44.682 cpu : usr=90.75%, sys=7.72%, ctx=13, majf=0, minf=1063 00:27:44.682 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:44.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.682 issued rwts: total=37672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.682 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:44.682 00:27:44.682 Run status group 0 (all jobs): 00:27:44.682 READ: bw=29.4MiB/s (30.9MB/s), 14.7MiB/s-14.7MiB/s (15.4MB/s-15.4MB/s), io=294MiB (309MB), run=10001-10002msec 00:27:44.682 ----------------------------------------------------- 00:27:44.682 Suppressions used: 00:27:44.682 count bytes template 00:27:44.682 2 16 /usr/src/fio/parse.c 00:27:44.682 1 8 libtcmalloc_minimal.so 00:27:44.682 1 904 libcrypto.so 00:27:44.682 ----------------------------------------------------- 00:27:44.682 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in 
"$@" 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:44.682 ************************************ 00:27:44.682 END TEST fio_dif_1_multi_subsystems 00:27:44.682 ************************************ 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.682 00:27:44.682 real 0m12.539s 00:27:44.682 user 0m20.212s 00:27:44.682 sys 0m1.944s 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:44.682 21:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:44.941 21:26:56 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:44.941 21:26:56 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:44.941 21:26:56 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:44.941 21:26:56 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.941 21:26:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:44.941 ************************************ 00:27:44.941 START TEST fio_dif_rand_params 00:27:44.941 ************************************ 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:44.941 bdev_null0 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:44.941 [2024-07-14 21:26:56.274286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.941 { 00:27:44.941 "params": { 00:27:44.941 "name": "Nvme$subsystem", 00:27:44.941 "trtype": "$TEST_TRANSPORT", 00:27:44.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.941 "adrfam": "ipv4", 00:27:44.941 "trsvcid": "$NVMF_PORT", 00:27:44.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.941 "hdgst": ${hdgst:-false}, 00:27:44.941 "ddgst": ${ddgst:-false} 00:27:44.941 }, 00:27:44.941 "method": "bdev_nvme_attach_controller" 
00:27:44.941 } 00:27:44.941 EOF 00:27:44.941 )") 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:44.941 "params": { 00:27:44.941 "name": "Nvme0", 00:27:44.941 "trtype": "tcp", 00:27:44.941 "traddr": "10.0.0.2", 00:27:44.941 "adrfam": "ipv4", 00:27:44.941 "trsvcid": "4420", 00:27:44.941 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:44.941 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:44.941 "hdgst": false, 00:27:44.941 "ddgst": false 00:27:44.941 }, 00:27:44.941 "method": "bdev_nvme_attach_controller" 00:27:44.941 }' 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:44.941 21:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:45.200 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:45.200 ... 
00:27:45.200 fio-3.35 00:27:45.200 Starting 3 threads 00:27:51.764 00:27:51.764 filename0: (groupid=0, jobs=1): err= 0: pid=89517: Sun Jul 14 21:27:02 2024 00:27:51.764 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(128MiB/5010msec) 00:27:51.764 slat (nsec): min=9482, max=52208, avg=19415.88, stdev=6428.90 00:27:51.764 clat (usec): min=13716, max=20502, avg=14604.27, stdev=766.09 00:27:51.764 lat (usec): min=13731, max=20533, avg=14623.68, stdev=766.90 00:27:51.764 clat percentiles (usec): 00:27:51.764 | 1.00th=[13698], 5.00th=[13829], 10.00th=[13829], 20.00th=[13960], 00:27:51.764 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14353], 60.00th=[14484], 00:27:51.764 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16057], 00:27:51.764 | 99.00th=[16581], 99.50th=[16909], 99.90th=[20579], 99.95th=[20579], 00:27:51.764 | 99.99th=[20579] 00:27:51.764 bw ( KiB/s): min=23760, max=27592, per=33.29%, avg=26178.40, stdev=1056.07, samples=10 00:27:51.764 iops : min= 185, max= 215, avg=204.40, stdev= 8.33, samples=10 00:27:51.764 lat (msec) : 20=99.71%, 50=0.29% 00:27:51.764 cpu : usr=92.83%, sys=6.51%, ctx=7, majf=0, minf=1074 00:27:51.764 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.764 issued rwts: total=1026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.764 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:51.764 filename0: (groupid=0, jobs=1): err= 0: pid=89518: Sun Jul 14 21:27:02 2024 00:27:51.764 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(128MiB/5009msec) 00:27:51.764 slat (nsec): min=6385, max=53274, avg=19600.24, stdev=5907.29 00:27:51.764 clat (usec): min=13721, max=18868, avg=14599.23, stdev=733.21 00:27:51.764 lat (usec): min=13736, max=18891, avg=14618.84, stdev=734.04 00:27:51.764 clat percentiles (usec): 00:27:51.764 | 1.00th=[13698], 5.00th=[13829], 10.00th=[13829], 20.00th=[13960], 00:27:51.764 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14353], 60.00th=[14484], 00:27:51.764 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15664], 95.00th=[15926], 00:27:51.764 | 99.00th=[16450], 99.50th=[16909], 99.90th=[18744], 99.95th=[18744], 00:27:51.764 | 99.99th=[18744] 00:27:51.764 bw ( KiB/s): min=23855, max=27592, per=33.30%, avg=26187.90, stdev=1032.05, samples=10 00:27:51.764 iops : min= 186, max= 215, avg=204.50, stdev= 8.07, samples=10 00:27:51.764 lat (msec) : 20=100.00% 00:27:51.764 cpu : usr=92.25%, sys=7.11%, ctx=12, majf=0, minf=1075 00:27:51.764 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.764 issued rwts: total=1026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.764 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:51.764 filename0: (groupid=0, jobs=1): err= 0: pid=89519: Sun Jul 14 21:27:02 2024 00:27:51.764 read: IOPS=204, BW=25.6MiB/s (26.9MB/s)(128MiB/5007msec) 00:27:51.764 slat (nsec): min=5564, max=54871, avg=19860.42, stdev=6204.83 00:27:51.764 clat (usec): min=13718, max=17024, avg=14591.72, stdev=704.47 00:27:51.764 lat (usec): min=13733, max=17046, avg=14611.58, stdev=705.39 00:27:51.764 clat percentiles (usec): 00:27:51.764 | 1.00th=[13698], 5.00th=[13829], 10.00th=[13829], 20.00th=[13960], 00:27:51.764 | 30.00th=[14091], 
40.00th=[14222], 50.00th=[14353], 60.00th=[14484], 00:27:51.764 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15664], 95.00th=[16057], 00:27:51.764 | 99.00th=[16450], 99.50th=[16581], 99.90th=[16909], 99.95th=[16909], 00:27:51.764 | 99.99th=[16909] 00:27:51.764 bw ( KiB/s): min=23808, max=26880, per=33.29%, avg=26183.40, stdev=984.12, samples=10 00:27:51.764 iops : min= 186, max= 210, avg=204.50, stdev= 7.65, samples=10 00:27:51.764 lat (msec) : 20=100.00% 00:27:51.764 cpu : usr=92.25%, sys=7.13%, ctx=9, majf=0, minf=1072 00:27:51.765 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.765 issued rwts: total=1026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.765 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:51.765 00:27:51.765 Run status group 0 (all jobs): 00:27:51.765 READ: bw=76.8MiB/s (80.5MB/s), 25.6MiB/s-25.6MiB/s (26.8MB/s-26.9MB/s), io=385MiB (403MB), run=5007-5010msec 00:27:52.329 ----------------------------------------------------- 00:27:52.329 Suppressions used: 00:27:52.329 count bytes template 00:27:52.329 5 44 /usr/src/fio/parse.c 00:27:52.329 1 8 libtcmalloc_minimal.so 00:27:52.329 1 904 libcrypto.so 00:27:52.329 ----------------------------------------------------- 00:27:52.329 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:52.329 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:52.330 
21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.330 bdev_null0 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.330 [2024-07-14 21:27:03.673333] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.330 bdev_null1 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:52.330 
21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.330 bdev_null2 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.330 { 00:27:52.330 "params": { 00:27:52.330 "name": "Nvme$subsystem", 00:27:52.330 "trtype": "$TEST_TRANSPORT", 00:27:52.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.330 "adrfam": "ipv4", 00:27:52.330 "trsvcid": "$NVMF_PORT", 00:27:52.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.330 "hdgst": ${hdgst:-false}, 00:27:52.330 "ddgst": ${ddgst:-false} 00:27:52.330 }, 00:27:52.330 "method": "bdev_nvme_attach_controller" 00:27:52.330 } 00:27:52.330 EOF 00:27:52.330 )") 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.330 { 00:27:52.330 "params": { 00:27:52.330 "name": "Nvme$subsystem", 00:27:52.330 "trtype": "$TEST_TRANSPORT", 00:27:52.330 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:52.330 "adrfam": "ipv4", 00:27:52.330 "trsvcid": "$NVMF_PORT", 00:27:52.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.330 "hdgst": ${hdgst:-false}, 00:27:52.330 "ddgst": ${ddgst:-false} 00:27:52.330 }, 00:27:52.330 "method": "bdev_nvme_attach_controller" 00:27:52.330 } 00:27:52.330 EOF 00:27:52.330 )") 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.330 { 00:27:52.330 "params": { 00:27:52.330 "name": "Nvme$subsystem", 00:27:52.330 "trtype": "$TEST_TRANSPORT", 00:27:52.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.330 "adrfam": "ipv4", 00:27:52.330 "trsvcid": "$NVMF_PORT", 00:27:52.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.330 "hdgst": ${hdgst:-false}, 00:27:52.330 "ddgst": ${ddgst:-false} 00:27:52.330 }, 00:27:52.330 "method": "bdev_nvme_attach_controller" 00:27:52.330 } 00:27:52.330 EOF 00:27:52.330 )") 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:52.330 21:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:52.330 "params": { 00:27:52.330 "name": "Nvme0", 00:27:52.330 "trtype": "tcp", 00:27:52.330 "traddr": "10.0.0.2", 00:27:52.330 "adrfam": "ipv4", 00:27:52.330 "trsvcid": "4420", 00:27:52.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:52.330 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:52.331 "hdgst": false, 00:27:52.331 "ddgst": false 00:27:52.331 }, 00:27:52.331 "method": "bdev_nvme_attach_controller" 00:27:52.331 },{ 00:27:52.331 "params": { 00:27:52.331 "name": "Nvme1", 00:27:52.331 "trtype": "tcp", 00:27:52.331 "traddr": "10.0.0.2", 00:27:52.331 "adrfam": "ipv4", 00:27:52.331 "trsvcid": "4420", 00:27:52.331 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:52.331 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:52.331 "hdgst": false, 00:27:52.331 "ddgst": false 00:27:52.331 }, 00:27:52.331 "method": "bdev_nvme_attach_controller" 00:27:52.331 },{ 00:27:52.331 "params": { 00:27:52.331 "name": "Nvme2", 00:27:52.331 "trtype": "tcp", 00:27:52.331 "traddr": "10.0.0.2", 00:27:52.331 "adrfam": "ipv4", 00:27:52.331 "trsvcid": "4420", 00:27:52.331 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:52.331 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:52.331 "hdgst": false, 00:27:52.331 "ddgst": false 00:27:52.331 }, 00:27:52.331 "method": "bdev_nvme_attach_controller" 00:27:52.331 }' 00:27:52.331 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:52.331 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:52.331 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:27:52.331 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:52.331 21:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.588 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:52.588 ... 00:27:52.588 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:52.588 ... 00:27:52.588 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:52.588 ... 00:27:52.588 fio-3.35 00:27:52.588 Starting 24 threads 00:28:04.839 00:28:04.839 filename0: (groupid=0, jobs=1): err= 0: pid=89622: Sun Jul 14 21:27:15 2024 00:28:04.839 read: IOPS=149, BW=599KiB/s (613kB/s)(5996KiB/10014msec) 00:28:04.839 slat (usec): min=8, max=8042, avg=37.47, stdev=394.25 00:28:04.839 clat (msec): min=36, max=258, avg=106.59, stdev=30.56 00:28:04.839 lat (msec): min=36, max=258, avg=106.63, stdev=30.57 00:28:04.839 clat percentiles (msec): 00:28:04.839 | 1.00th=[ 39], 5.00th=[ 52], 10.00th=[ 61], 20.00th=[ 74], 00:28:04.839 | 30.00th=[ 87], 40.00th=[ 108], 50.00th=[ 121], 60.00th=[ 121], 00:28:04.839 | 70.00th=[ 127], 80.00th=[ 132], 90.00th=[ 132], 95.00th=[ 134], 00:28:04.839 | 99.00th=[ 211], 99.50th=[ 211], 99.90th=[ 259], 99.95th=[ 259], 00:28:04.839 | 99.99th=[ 259] 00:28:04.839 bw ( KiB/s): min= 380, max= 848, per=4.04%, avg=582.32, stdev=148.84, samples=19 00:28:04.839 iops : min= 95, max= 212, avg=145.58, stdev=37.21, samples=19 00:28:04.839 lat (msec) : 50=4.47%, 100=34.42%, 250=60.97%, 500=0.13% 00:28:04.839 cpu : usr=30.92%, sys=2.23%, ctx=1162, majf=0, minf=1074 00:28:04.839 IO depths : 1=0.1%, 2=3.5%, 4=13.8%, 8=68.7%, 16=13.9%, 32=0.0%, >=64=0.0% 00:28:04.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.839 complete : 0=0.0%, 4=90.9%, 8=6.1%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.839 issued rwts: total=1499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.839 filename0: (groupid=0, jobs=1): err= 0: pid=89623: Sun Jul 14 21:27:15 2024 00:28:04.839 read: IOPS=164, BW=660KiB/s (676kB/s)(6604KiB/10007msec) 00:28:04.839 slat (usec): min=5, max=4033, avg=21.54, stdev=125.87 00:28:04.839 clat (msec): min=3, max=233, avg=96.85, stdev=31.67 00:28:04.839 lat (msec): min=4, max=233, avg=96.87, stdev=31.67 00:28:04.839 clat percentiles (msec): 00:28:04.839 | 1.00th=[ 8], 5.00th=[ 51], 10.00th=[ 57], 20.00th=[ 69], 00:28:04.839 | 30.00th=[ 82], 40.00th=[ 89], 50.00th=[ 97], 60.00th=[ 109], 00:28:04.839 | 70.00th=[ 121], 80.00th=[ 127], 90.00th=[ 131], 95.00th=[ 136], 00:28:04.839 | 99.00th=[ 176], 99.50th=[ 197], 99.90th=[ 234], 99.95th=[ 234], 00:28:04.839 | 99.99th=[ 234] 00:28:04.839 bw ( KiB/s): min= 380, max= 840, per=4.38%, avg=631.79, stdev=121.72, samples=19 00:28:04.839 iops : min= 95, max= 210, avg=157.95, stdev=30.43, samples=19 00:28:04.839 lat (msec) : 4=0.06%, 10=1.57%, 20=0.67%, 50=3.21%, 100=47.91% 00:28:04.839 lat (msec) : 250=46.58% 00:28:04.839 cpu : usr=40.03%, sys=2.77%, ctx=1269, majf=0, minf=1072 00:28:04.839 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.6%, 16=15.4%, 32=0.0%, >=64=0.0% 00:28:04.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.839 complete : 0=0.0%, 4=87.6%, 8=11.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.839 issued rwts: total=1651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.839 filename0: (groupid=0, jobs=1): err= 0: pid=89624: Sun Jul 14 
21:27:15 2024 00:28:04.839 read: IOPS=166, BW=666KiB/s (682kB/s)(6716KiB/10090msec) 00:28:04.839 slat (usec): min=5, max=8046, avg=25.18, stdev=276.92 00:28:04.839 clat (msec): min=13, max=191, avg=95.79, stdev=28.68 00:28:04.839 lat (msec): min=13, max=191, avg=95.82, stdev=28.67 00:28:04.839 clat percentiles (msec): 00:28:04.839 | 1.00th=[ 16], 5.00th=[ 48], 10.00th=[ 60], 20.00th=[ 72], 00:28:04.839 | 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 96], 60.00th=[ 108], 00:28:04.839 | 70.00th=[ 121], 80.00th=[ 122], 90.00th=[ 132], 95.00th=[ 132], 00:28:04.839 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 192], 00:28:04.839 | 99.99th=[ 192] 00:28:04.839 bw ( KiB/s): min= 560, max= 1064, per=4.61%, avg=664.85, stdev=130.25, samples=20 00:28:04.839 iops : min= 140, max= 266, avg=166.20, stdev=32.55, samples=20 00:28:04.839 lat (msec) : 20=1.91%, 50=4.65%, 100=49.08%, 250=44.37% 00:28:04.839 cpu : usr=32.09%, sys=2.07%, ctx=891, majf=0, minf=1073 00:28:04.839 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:04.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.839 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.839 issued rwts: total=1679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.839 filename0: (groupid=0, jobs=1): err= 0: pid=89625: Sun Jul 14 21:27:15 2024 00:28:04.839 read: IOPS=135, BW=542KiB/s (555kB/s)(5440KiB/10033msec) 00:28:04.839 slat (nsec): min=6600, max=38054, avg=15090.47, stdev=5609.23 00:28:04.839 clat (msec): min=36, max=230, avg=117.86, stdev=23.20 00:28:04.839 lat (msec): min=36, max=230, avg=117.88, stdev=23.20 00:28:04.839 clat percentiles (msec): 00:28:04.839 | 1.00th=[ 59], 5.00th=[ 80], 10.00th=[ 84], 20.00th=[ 93], 00:28:04.839 | 30.00th=[ 109], 40.00th=[ 121], 50.00th=[ 121], 60.00th=[ 126], 00:28:04.839 | 70.00th=[ 132], 80.00th=[ 132], 90.00th=[ 136], 95.00th=[ 157], 00:28:04.839 | 99.00th=[ 184], 99.50th=[ 184], 99.90th=[ 230], 99.95th=[ 232], 00:28:04.839 | 99.99th=[ 232] 00:28:04.839 bw ( KiB/s): min= 384, max= 657, per=3.73%, avg=537.90, stdev=95.85, samples=20 00:28:04.839 iops : min= 96, max= 164, avg=134.40, stdev=23.91, samples=20 00:28:04.839 lat (msec) : 50=0.15%, 100=24.85%, 250=75.00% 00:28:04.839 cpu : usr=33.41%, sys=2.28%, ctx=1213, majf=0, minf=1074 00:28:04.839 IO depths : 1=0.1%, 2=6.2%, 4=24.4%, 8=56.8%, 16=12.6%, 32=0.0%, >=64=0.0% 00:28:04.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.839 complete : 0=0.0%, 4=94.3%, 8=0.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.839 issued rwts: total=1360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.839 filename0: (groupid=0, jobs=1): err= 0: pid=89626: Sun Jul 14 21:27:15 2024 00:28:04.839 read: IOPS=168, BW=673KiB/s (689kB/s)(6776KiB/10064msec) 00:28:04.839 slat (usec): min=5, max=8037, avg=28.72, stdev=292.10 00:28:04.839 clat (msec): min=15, max=143, avg=94.73, stdev=27.17 00:28:04.839 lat (msec): min=15, max=144, avg=94.76, stdev=27.17 00:28:04.839 clat percentiles (msec): 00:28:04.839 | 1.00th=[ 32], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 70], 00:28:04.839 | 30.00th=[ 81], 40.00th=[ 87], 50.00th=[ 95], 60.00th=[ 106], 00:28:04.839 | 70.00th=[ 118], 80.00th=[ 124], 90.00th=[ 129], 95.00th=[ 132], 00:28:04.839 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:28:04.839 | 99.99th=[ 144] 
00:28:04.839 bw ( KiB/s): min= 560, max= 872, per=4.68%, avg=673.60, stdev=100.14, samples=20 00:28:04.839 iops : min= 140, max= 218, avg=168.40, stdev=25.04, samples=20 00:28:04.839 lat (msec) : 20=0.12%, 50=5.96%, 100=51.18%, 250=42.74% 00:28:04.839 cpu : usr=41.39%, sys=2.77%, ctx=1205, majf=0, minf=1075 00:28:04.839 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:28:04.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.839 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.839 issued rwts: total=1694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.840 filename0: (groupid=0, jobs=1): err= 0: pid=89627: Sun Jul 14 21:27:15 2024 00:28:04.840 read: IOPS=144, BW=580KiB/s (594kB/s)(5828KiB/10051msec) 00:28:04.840 slat (usec): min=4, max=8036, avg=27.86, stdev=297.02 00:28:04.840 clat (msec): min=26, max=216, avg=110.07, stdev=30.12 00:28:04.840 lat (msec): min=26, max=216, avg=110.10, stdev=30.12 00:28:04.840 clat percentiles (msec): 00:28:04.840 | 1.00th=[ 36], 5.00th=[ 60], 10.00th=[ 64], 20.00th=[ 84], 00:28:04.840 | 30.00th=[ 95], 40.00th=[ 108], 50.00th=[ 121], 60.00th=[ 124], 00:28:04.840 | 70.00th=[ 130], 80.00th=[ 132], 90.00th=[ 136], 95.00th=[ 157], 00:28:04.840 | 99.00th=[ 180], 99.50th=[ 180], 99.90th=[ 218], 99.95th=[ 218], 00:28:04.840 | 99.99th=[ 218] 00:28:04.840 bw ( KiB/s): min= 384, max= 872, per=4.00%, avg=576.40, stdev=147.77, samples=20 00:28:04.840 iops : min= 96, max= 218, avg=144.10, stdev=36.94, samples=20 00:28:04.840 lat (msec) : 50=2.61%, 100=32.19%, 250=65.20% 00:28:04.840 cpu : usr=32.09%, sys=2.22%, ctx=912, majf=0, minf=1075 00:28:04.840 IO depths : 1=0.1%, 2=4.4%, 4=17.5%, 8=64.5%, 16=13.5%, 32=0.0%, >=64=0.0% 00:28:04.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.840 complete : 0=0.0%, 4=92.1%, 8=4.0%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.840 issued rwts: total=1457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.840 filename0: (groupid=0, jobs=1): err= 0: pid=89628: Sun Jul 14 21:27:15 2024 00:28:04.840 read: IOPS=145, BW=584KiB/s (598kB/s)(5844KiB/10011msec) 00:28:04.840 slat (usec): min=5, max=8033, avg=27.40, stdev=253.27 00:28:04.840 clat (msec): min=11, max=260, avg=109.46, stdev=30.89 00:28:04.840 lat (msec): min=11, max=260, avg=109.49, stdev=30.90 00:28:04.840 clat percentiles (msec): 00:28:04.840 | 1.00th=[ 32], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 84], 00:28:04.840 | 30.00th=[ 94], 40.00th=[ 115], 50.00th=[ 120], 60.00th=[ 124], 00:28:04.840 | 70.00th=[ 128], 80.00th=[ 130], 90.00th=[ 134], 95.00th=[ 136], 00:28:04.840 | 99.00th=[ 249], 99.50th=[ 249], 99.90th=[ 262], 99.95th=[ 262], 00:28:04.840 | 99.99th=[ 262] 00:28:04.840 bw ( KiB/s): min= 368, max= 824, per=3.91%, avg=562.32, stdev=130.29, samples=19 00:28:04.840 iops : min= 92, max= 206, avg=140.58, stdev=32.57, samples=19 00:28:04.840 lat (msec) : 20=0.89%, 50=0.89%, 100=30.80%, 250=67.28%, 500=0.14% 00:28:04.840 cpu : usr=39.53%, sys=2.60%, ctx=1248, majf=0, minf=1075 00:28:04.840 IO depths : 1=0.1%, 2=4.3%, 4=17.2%, 8=64.9%, 16=13.6%, 32=0.0%, >=64=0.0% 00:28:04.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.840 complete : 0=0.0%, 4=92.0%, 8=4.2%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.840 issued rwts: total=1461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.840 
latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.840 filename0: (groupid=0, jobs=1): err= 0: pid=89629: Sun Jul 14 21:27:15 2024 00:28:04.840 read: IOPS=143, BW=574KiB/s (588kB/s)(5776KiB/10061msec) 00:28:04.840 slat (usec): min=5, max=8047, avg=32.22, stdev=365.65 00:28:04.840 clat (msec): min=10, max=216, avg=111.05, stdev=32.66 00:28:04.840 lat (msec): min=10, max=216, avg=111.08, stdev=32.67 00:28:04.840 clat percentiles (msec): 00:28:04.840 | 1.00th=[ 13], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 85], 00:28:04.840 | 30.00th=[ 96], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 121], 00:28:04.840 | 70.00th=[ 129], 80.00th=[ 132], 90.00th=[ 140], 95.00th=[ 157], 00:28:04.840 | 99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 218], 99.95th=[ 218], 00:28:04.840 | 99.99th=[ 218] 00:28:04.840 bw ( KiB/s): min= 384, max= 876, per=3.98%, avg=573.80, stdev=145.69, samples=20 00:28:04.840 iops : min= 96, max= 219, avg=143.45, stdev=36.42, samples=20 00:28:04.840 lat (msec) : 20=1.25%, 50=2.70%, 100=28.67%, 250=67.38% 00:28:04.840 cpu : usr=32.00%, sys=2.18%, ctx=887, majf=0, minf=1075 00:28:04.840 IO depths : 1=0.1%, 2=5.0%, 4=19.9%, 8=61.8%, 16=13.2%, 32=0.0%, >=64=0.0% 00:28:04.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.840 complete : 0=0.0%, 4=92.8%, 8=2.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.840 issued rwts: total=1444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.840 filename1: (groupid=0, jobs=1): err= 0: pid=89630: Sun Jul 14 21:27:15 2024 00:28:04.840 read: IOPS=137, BW=550KiB/s (563kB/s)(5504KiB/10003msec) 00:28:04.840 slat (usec): min=5, max=8034, avg=25.72, stdev=241.71 00:28:04.840 clat (msec): min=33, max=249, avg=116.13, stdev=21.89 00:28:04.840 lat (msec): min=33, max=249, avg=116.15, stdev=21.88 00:28:04.840 clat percentiles (msec): 00:28:04.840 | 1.00th=[ 65], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 95], 00:28:04.840 | 30.00th=[ 105], 40.00th=[ 117], 50.00th=[ 122], 60.00th=[ 125], 00:28:04.840 | 70.00th=[ 129], 80.00th=[ 132], 90.00th=[ 134], 95.00th=[ 138], 00:28:04.840 | 99.00th=[ 203], 99.50th=[ 203], 99.90th=[ 251], 99.95th=[ 251], 00:28:04.840 | 99.99th=[ 251] 00:28:04.840 bw ( KiB/s): min= 384, max= 752, per=3.73%, avg=537.89, stdev=87.74, samples=19 00:28:04.840 iops : min= 96, max= 188, avg=134.47, stdev=21.94, samples=19 00:28:04.840 lat (msec) : 50=0.15%, 100=27.47%, 250=72.38% 00:28:04.840 cpu : usr=42.42%, sys=2.95%, ctx=1242, majf=0, minf=1073 00:28:04.840 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:28:04.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.840 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.840 issued rwts: total=1376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.840 filename1: (groupid=0, jobs=1): err= 0: pid=89631: Sun Jul 14 21:27:15 2024 00:28:04.840 read: IOPS=163, BW=653KiB/s (669kB/s)(6588KiB/10089msec) 00:28:04.840 slat (usec): min=4, max=8045, avg=26.00, stdev=279.55 00:28:04.840 clat (usec): min=1926, max=185988, avg=97684.18, stdev=35826.10 00:28:04.840 lat (usec): min=1935, max=186008, avg=97710.18, stdev=35831.71 00:28:04.840 clat percentiles (msec): 00:28:04.840 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 49], 20.00th=[ 72], 00:28:04.840 | 30.00th=[ 85], 40.00th=[ 95], 50.00th=[ 106], 60.00th=[ 121], 00:28:04.840 | 70.00th=[ 121], 80.00th=[ 
129], 90.00th=[ 132], 95.00th=[ 134], 00:28:04.840 | 99.00th=[ 157], 99.50th=[ 186], 99.90th=[ 186], 99.95th=[ 186], 00:28:04.840 | 99.99th=[ 186] 00:28:04.840 bw ( KiB/s): min= 400, max= 1523, per=4.53%, avg=652.20, stdev=237.64, samples=20 00:28:04.840 iops : min= 100, max= 380, avg=163.00, stdev=59.26, samples=20 00:28:04.840 lat (msec) : 2=0.49%, 4=0.49%, 10=1.94%, 20=3.89%, 50=3.70% 00:28:04.840 lat (msec) : 100=38.92%, 250=50.58% 00:28:04.840 cpu : usr=32.11%, sys=2.23%, ctx=909, majf=0, minf=1075 00:28:04.840 IO depths : 1=0.4%, 2=2.4%, 4=8.2%, 8=74.1%, 16=15.0%, 32=0.0%, >=64=0.0% 00:28:04.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.840 complete : 0=0.0%, 4=89.7%, 8=8.5%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.840 issued rwts: total=1647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.840 filename1: (groupid=0, jobs=1): err= 0: pid=89632: Sun Jul 14 21:27:15 2024 00:28:04.840 read: IOPS=142, BW=570KiB/s (584kB/s)(5740KiB/10071msec) 00:28:04.840 slat (usec): min=5, max=8086, avg=28.50, stdev=300.37 00:28:04.840 clat (msec): min=23, max=201, avg=111.83, stdev=29.54 00:28:04.840 lat (msec): min=23, max=201, avg=111.86, stdev=29.55 00:28:04.840 clat percentiles (msec): 00:28:04.840 | 1.00th=[ 32], 5.00th=[ 61], 10.00th=[ 72], 20.00th=[ 86], 00:28:04.840 | 30.00th=[ 96], 40.00th=[ 113], 50.00th=[ 121], 60.00th=[ 125], 00:28:04.840 | 70.00th=[ 130], 80.00th=[ 131], 90.00th=[ 136], 95.00th=[ 161], 00:28:04.840 | 99.00th=[ 182], 99.50th=[ 182], 99.90th=[ 203], 99.95th=[ 203], 00:28:04.840 | 99.99th=[ 203] 00:28:04.840 bw ( KiB/s): min= 400, max= 840, per=3.94%, avg=567.30, stdev=130.67, samples=20 00:28:04.840 iops : min= 100, max= 210, avg=141.80, stdev=32.63, samples=20 00:28:04.840 lat (msec) : 50=2.09%, 100=30.59%, 250=67.32% 00:28:04.840 cpu : usr=38.39%, sys=2.88%, ctx=1262, majf=0, minf=1073 00:28:04.840 IO depths : 1=0.1%, 2=5.0%, 4=20.0%, 8=61.7%, 16=13.2%, 32=0.0%, >=64=0.0% 00:28:04.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.840 complete : 0=0.0%, 4=92.9%, 8=2.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.840 issued rwts: total=1435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.840 filename1: (groupid=0, jobs=1): err= 0: pid=89633: Sun Jul 14 21:27:15 2024 00:28:04.840 read: IOPS=139, BW=559KiB/s (572kB/s)(5612KiB/10040msec) 00:28:04.840 slat (usec): min=9, max=8041, avg=28.19, stdev=302.86 00:28:04.840 clat (msec): min=47, max=245, avg=114.17, stdev=27.22 00:28:04.840 lat (msec): min=47, max=245, avg=114.20, stdev=27.21 00:28:04.841 clat percentiles (msec): 00:28:04.841 | 1.00th=[ 59], 5.00th=[ 61], 10.00th=[ 74], 20.00th=[ 86], 00:28:04.841 | 30.00th=[ 103], 40.00th=[ 116], 50.00th=[ 121], 60.00th=[ 124], 00:28:04.841 | 70.00th=[ 131], 80.00th=[ 132], 90.00th=[ 133], 95.00th=[ 157], 00:28:04.841 | 99.00th=[ 199], 99.50th=[ 199], 99.90th=[ 245], 99.95th=[ 245], 00:28:04.841 | 99.99th=[ 245] 00:28:04.841 bw ( KiB/s): min= 384, max= 790, per=3.85%, avg=554.70, stdev=115.93, samples=20 00:28:04.841 iops : min= 96, max= 197, avg=138.65, stdev=28.93, samples=20 00:28:04.841 lat (msec) : 50=0.43%, 100=29.51%, 250=70.06% 00:28:04.841 cpu : usr=31.14%, sys=1.99%, ctx=1098, majf=0, minf=1075 00:28:04.841 IO depths : 1=0.1%, 2=5.0%, 4=19.9%, 8=61.8%, 16=13.3%, 32=0.0%, >=64=0.0% 00:28:04.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:28:04.841 complete : 0=0.0%, 4=92.9%, 8=2.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.841 issued rwts: total=1403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.841 filename1: (groupid=0, jobs=1): err= 0: pid=89634: Sun Jul 14 21:27:15 2024 00:28:04.841 read: IOPS=156, BW=625KiB/s (640kB/s)(6256KiB/10014msec) 00:28:04.841 slat (usec): min=4, max=8039, avg=20.84, stdev=202.96 00:28:04.841 clat (msec): min=33, max=275, avg=102.31, stdev=31.91 00:28:04.841 lat (msec): min=33, max=275, avg=102.33, stdev=31.91 00:28:04.841 clat percentiles (msec): 00:28:04.841 | 1.00th=[ 44], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 72], 00:28:04.841 | 30.00th=[ 85], 40.00th=[ 93], 50.00th=[ 105], 60.00th=[ 121], 00:28:04.841 | 70.00th=[ 121], 80.00th=[ 131], 90.00th=[ 132], 95.00th=[ 133], 00:28:04.841 | 99.00th=[ 264], 99.50th=[ 264], 99.90th=[ 275], 99.95th=[ 275], 00:28:04.841 | 99.99th=[ 275] 00:28:04.841 bw ( KiB/s): min= 253, max= 840, per=4.23%, avg=609.74, stdev=150.12, samples=19 00:28:04.841 iops : min= 63, max= 210, avg=152.42, stdev=37.56, samples=19 00:28:04.841 lat (msec) : 50=3.64%, 100=44.31%, 250=51.02%, 500=1.02% 00:28:04.841 cpu : usr=32.45%, sys=2.33%, ctx=1129, majf=0, minf=1075 00:28:04.841 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.7%, 16=15.3%, 32=0.0%, >=64=0.0% 00:28:04.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.841 complete : 0=0.0%, 4=88.5%, 8=10.3%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.841 issued rwts: total=1564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.841 filename1: (groupid=0, jobs=1): err= 0: pid=89635: Sun Jul 14 21:27:15 2024 00:28:04.841 read: IOPS=165, BW=662KiB/s (678kB/s)(6656KiB/10054msec) 00:28:04.841 slat (usec): min=5, max=8042, avg=40.53, stdev=439.06 00:28:04.841 clat (msec): min=27, max=156, avg=96.43, stdev=26.83 00:28:04.841 lat (msec): min=27, max=156, avg=96.47, stdev=26.82 00:28:04.841 clat percentiles (msec): 00:28:04.841 | 1.00th=[ 36], 5.00th=[ 52], 10.00th=[ 61], 20.00th=[ 72], 00:28:04.841 | 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 96], 60.00th=[ 108], 00:28:04.841 | 70.00th=[ 120], 80.00th=[ 122], 90.00th=[ 132], 95.00th=[ 132], 00:28:04.841 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 157], 00:28:04.841 | 99.99th=[ 157] 00:28:04.841 bw ( KiB/s): min= 536, max= 848, per=4.58%, avg=659.20, stdev=99.32, samples=20 00:28:04.841 iops : min= 134, max= 212, avg=164.80, stdev=24.83, samples=20 00:28:04.841 lat (msec) : 50=4.57%, 100=52.10%, 250=43.33% 00:28:04.841 cpu : usr=31.09%, sys=2.10%, ctx=880, majf=0, minf=1072 00:28:04.841 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:28:04.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.841 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.841 issued rwts: total=1664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.841 filename1: (groupid=0, jobs=1): err= 0: pid=89636: Sun Jul 14 21:27:15 2024 00:28:04.841 read: IOPS=142, BW=568KiB/s (582kB/s)(5688KiB/10011msec) 00:28:04.841 slat (usec): min=5, max=4034, avg=19.39, stdev=106.70 00:28:04.841 clat (msec): min=11, max=250, avg=112.49, stdev=28.95 00:28:04.841 lat (msec): min=11, max=250, avg=112.51, stdev=28.96 00:28:04.841 clat percentiles (msec): 00:28:04.841 | 1.00th=[ 32], 
5.00th=[ 61], 10.00th=[ 74], 20.00th=[ 87], 00:28:04.841 | 30.00th=[ 96], 40.00th=[ 116], 50.00th=[ 121], 60.00th=[ 127], 00:28:04.841 | 70.00th=[ 129], 80.00th=[ 131], 90.00th=[ 136], 95.00th=[ 153], 00:28:04.841 | 99.00th=[ 203], 99.50th=[ 203], 99.90th=[ 251], 99.95th=[ 251], 00:28:04.841 | 99.99th=[ 251] 00:28:04.841 bw ( KiB/s): min= 384, max= 816, per=3.80%, avg=547.58, stdev=108.44, samples=19 00:28:04.841 iops : min= 96, max= 204, avg=136.89, stdev=27.11, samples=19 00:28:04.841 lat (msec) : 20=0.91%, 50=1.27%, 100=29.40%, 250=68.35%, 500=0.07% 00:28:04.841 cpu : usr=40.77%, sys=2.68%, ctx=1317, majf=0, minf=1072 00:28:04.841 IO depths : 1=0.1%, 2=5.1%, 4=20.2%, 8=61.5%, 16=13.2%, 32=0.0%, >=64=0.0% 00:28:04.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.841 complete : 0=0.0%, 4=92.9%, 8=2.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.841 issued rwts: total=1422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.841 filename1: (groupid=0, jobs=1): err= 0: pid=89637: Sun Jul 14 21:27:15 2024 00:28:04.841 read: IOPS=138, BW=556KiB/s (569kB/s)(5560KiB/10006msec) 00:28:04.841 slat (usec): min=5, max=8043, avg=42.83, stdev=384.98 00:28:04.841 clat (msec): min=8, max=316, avg=114.87, stdev=27.09 00:28:04.841 lat (msec): min=8, max=316, avg=114.91, stdev=27.09 00:28:04.841 clat percentiles (msec): 00:28:04.841 | 1.00th=[ 9], 5.00th=[ 80], 10.00th=[ 87], 20.00th=[ 94], 00:28:04.841 | 30.00th=[ 107], 40.00th=[ 117], 50.00th=[ 121], 60.00th=[ 125], 00:28:04.841 | 70.00th=[ 128], 80.00th=[ 130], 90.00th=[ 134], 95.00th=[ 136], 00:28:04.841 | 99.00th=[ 253], 99.50th=[ 253], 99.90th=[ 317], 99.95th=[ 317], 00:28:04.841 | 99.99th=[ 317] 00:28:04.841 bw ( KiB/s): min= 368, max= 752, per=3.74%, avg=538.11, stdev=89.44, samples=19 00:28:04.841 iops : min= 92, max= 188, avg=134.53, stdev=22.36, samples=19 00:28:04.841 lat (msec) : 10=1.01%, 20=0.14%, 50=1.01%, 100=24.03%, 250=72.66% 00:28:04.841 lat (msec) : 500=1.15% 00:28:04.841 cpu : usr=41.43%, sys=2.71%, ctx=1318, majf=0, minf=1075 00:28:04.841 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:28:04.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.841 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.841 issued rwts: total=1390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.841 filename2: (groupid=0, jobs=1): err= 0: pid=89638: Sun Jul 14 21:27:15 2024 00:28:04.841 read: IOPS=154, BW=617KiB/s (632kB/s)(6212KiB/10061msec) 00:28:04.841 slat (usec): min=5, max=8037, avg=34.08, stdev=366.58 00:28:04.841 clat (msec): min=8, max=192, avg=103.24, stdev=30.65 00:28:04.841 lat (msec): min=8, max=192, avg=103.28, stdev=30.65 00:28:04.841 clat percentiles (msec): 00:28:04.841 | 1.00th=[ 15], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 73], 00:28:04.841 | 30.00th=[ 85], 40.00th=[ 96], 50.00th=[ 112], 60.00th=[ 121], 00:28:04.841 | 70.00th=[ 124], 80.00th=[ 132], 90.00th=[ 132], 95.00th=[ 133], 00:28:04.841 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 192], 00:28:04.841 | 99.99th=[ 192] 00:28:04.841 bw ( KiB/s): min= 400, max= 876, per=4.27%, avg=614.60, stdev=146.20, samples=20 00:28:04.841 iops : min= 100, max= 219, avg=153.65, stdev=36.55, samples=20 00:28:04.841 lat (msec) : 10=0.13%, 20=0.90%, 50=4.06%, 100=38.89%, 250=56.02% 00:28:04.841 cpu : usr=32.03%, sys=2.07%, 
ctx=877, majf=0, minf=1074 00:28:04.841 IO depths : 1=0.1%, 2=3.2%, 4=12.3%, 8=70.2%, 16=14.2%, 32=0.0%, >=64=0.0% 00:28:04.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.841 complete : 0=0.0%, 4=90.5%, 8=6.8%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.841 issued rwts: total=1553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.842 filename2: (groupid=0, jobs=1): err= 0: pid=89639: Sun Jul 14 21:27:15 2024 00:28:04.842 read: IOPS=140, BW=562KiB/s (576kB/s)(5624KiB/10004msec) 00:28:04.842 slat (usec): min=5, max=8033, avg=24.58, stdev=239.30 00:28:04.842 clat (msec): min=5, max=308, avg=113.66, stdev=29.23 00:28:04.842 lat (msec): min=5, max=308, avg=113.69, stdev=29.23 00:28:04.842 clat percentiles (msec): 00:28:04.842 | 1.00th=[ 10], 5.00th=[ 82], 10.00th=[ 85], 20.00th=[ 93], 00:28:04.842 | 30.00th=[ 108], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 123], 00:28:04.842 | 70.00th=[ 128], 80.00th=[ 132], 90.00th=[ 132], 95.00th=[ 134], 00:28:04.842 | 99.00th=[ 239], 99.50th=[ 239], 99.90th=[ 309], 99.95th=[ 309], 00:28:04.842 | 99.99th=[ 309] 00:28:04.842 bw ( KiB/s): min= 368, max= 752, per=3.74%, avg=538.11, stdev=80.21, samples=19 00:28:04.842 iops : min= 92, max= 188, avg=134.53, stdev=20.05, samples=19 00:28:04.842 lat (msec) : 10=2.13%, 20=0.14%, 50=1.00%, 100=22.90%, 250=73.68% 00:28:04.842 lat (msec) : 500=0.14% 00:28:04.842 cpu : usr=31.15%, sys=2.07%, ctx=1156, majf=0, minf=1075 00:28:04.842 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:28:04.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.842 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.842 issued rwts: total=1406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.842 filename2: (groupid=0, jobs=1): err= 0: pid=89640: Sun Jul 14 21:27:15 2024 00:28:04.842 read: IOPS=137, BW=550KiB/s (563kB/s)(5504KiB/10007msec) 00:28:04.842 slat (usec): min=5, max=7031, avg=33.28, stdev=287.36 00:28:04.842 clat (msec): min=15, max=319, avg=116.01, stdev=26.23 00:28:04.842 lat (msec): min=15, max=319, avg=116.04, stdev=26.27 00:28:04.842 clat percentiles (msec): 00:28:04.842 | 1.00th=[ 36], 5.00th=[ 82], 10.00th=[ 85], 20.00th=[ 93], 00:28:04.842 | 30.00th=[ 106], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 124], 00:28:04.842 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 136], 95.00th=[ 140], 00:28:04.842 | 99.00th=[ 251], 99.50th=[ 251], 99.90th=[ 313], 99.95th=[ 321], 00:28:04.842 | 99.99th=[ 321] 00:28:04.842 bw ( KiB/s): min= 365, max= 752, per=3.74%, avg=538.79, stdev=92.08, samples=19 00:28:04.842 iops : min= 91, max= 188, avg=134.68, stdev=23.05, samples=19 00:28:04.842 lat (msec) : 20=0.15%, 50=1.02%, 100=25.44%, 250=72.24%, 500=1.16% 00:28:04.842 cpu : usr=39.49%, sys=2.71%, ctx=1203, majf=0, minf=1075 00:28:04.842 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:28:04.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.842 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.842 issued rwts: total=1376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.842 filename2: (groupid=0, jobs=1): err= 0: pid=89641: Sun Jul 14 21:27:15 2024 00:28:04.842 read: IOPS=161, BW=648KiB/s (663kB/s)(6504KiB/10044msec) 00:28:04.842 
slat (usec): min=5, max=8035, avg=27.18, stdev=244.21 00:28:04.842 clat (msec): min=38, max=175, avg=98.58, stdev=27.40 00:28:04.842 lat (msec): min=38, max=175, avg=98.61, stdev=27.39 00:28:04.842 clat percentiles (msec): 00:28:04.842 | 1.00th=[ 46], 5.00th=[ 53], 10.00th=[ 60], 20.00th=[ 72], 00:28:04.842 | 30.00th=[ 84], 40.00th=[ 90], 50.00th=[ 99], 60.00th=[ 111], 00:28:04.842 | 70.00th=[ 121], 80.00th=[ 128], 90.00th=[ 131], 95.00th=[ 136], 00:28:04.842 | 99.00th=[ 157], 99.50th=[ 159], 99.90th=[ 176], 99.95th=[ 176], 00:28:04.842 | 99.99th=[ 176] 00:28:04.842 bw ( KiB/s): min= 400, max= 872, per=4.49%, avg=646.45, stdev=126.22, samples=20 00:28:04.842 iops : min= 100, max= 218, avg=161.60, stdev=31.53, samples=20 00:28:04.842 lat (msec) : 50=4.00%, 100=48.09%, 250=47.91% 00:28:04.842 cpu : usr=40.71%, sys=2.78%, ctx=1232, majf=0, minf=1073 00:28:04.842 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=80.3%, 16=15.4%, 32=0.0%, >=64=0.0% 00:28:04.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.842 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.842 issued rwts: total=1626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.842 filename2: (groupid=0, jobs=1): err= 0: pid=89642: Sun Jul 14 21:27:15 2024 00:28:04.842 read: IOPS=169, BW=676KiB/s (693kB/s)(6824KiB/10089msec) 00:28:04.842 slat (usec): min=9, max=5029, avg=31.33, stdev=248.87 00:28:04.842 clat (msec): min=9, max=179, avg=94.25, stdev=29.67 00:28:04.842 lat (msec): min=9, max=179, avg=94.29, stdev=29.67 00:28:04.842 clat percentiles (msec): 00:28:04.842 | 1.00th=[ 15], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 68], 00:28:04.842 | 30.00th=[ 81], 40.00th=[ 87], 50.00th=[ 96], 60.00th=[ 104], 00:28:04.842 | 70.00th=[ 116], 80.00th=[ 126], 90.00th=[ 130], 95.00th=[ 132], 00:28:04.842 | 99.00th=[ 140], 99.50th=[ 180], 99.90th=[ 180], 99.95th=[ 180], 00:28:04.842 | 99.99th=[ 180] 00:28:04.842 bw ( KiB/s): min= 488, max= 1010, per=4.69%, avg=675.75, stdev=130.69, samples=20 00:28:04.842 iops : min= 122, max= 252, avg=168.90, stdev=32.59, samples=20 00:28:04.842 lat (msec) : 10=0.82%, 20=1.06%, 50=5.04%, 100=49.53%, 250=43.55% 00:28:04.842 cpu : usr=42.41%, sys=2.94%, ctx=1239, majf=0, minf=1072 00:28:04.842 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:28:04.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.842 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.842 issued rwts: total=1706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.842 filename2: (groupid=0, jobs=1): err= 0: pid=89643: Sun Jul 14 21:27:15 2024 00:28:04.842 read: IOPS=135, BW=544KiB/s (557kB/s)(5440KiB/10008msec) 00:28:04.842 slat (usec): min=5, max=4038, avg=19.17, stdev=109.23 00:28:04.842 clat (msec): min=16, max=246, avg=117.54, stdev=24.28 00:28:04.842 lat (msec): min=16, max=246, avg=117.56, stdev=24.28 00:28:04.842 clat percentiles (msec): 00:28:04.842 | 1.00th=[ 37], 5.00th=[ 81], 10.00th=[ 86], 20.00th=[ 95], 00:28:04.842 | 30.00th=[ 111], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 126], 00:28:04.842 | 70.00th=[ 129], 80.00th=[ 131], 90.00th=[ 136], 95.00th=[ 150], 00:28:04.842 | 99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 247], 99.95th=[ 247], 00:28:04.842 | 99.99th=[ 247] 00:28:04.842 bw ( KiB/s): min= 384, max= 652, per=3.70%, avg=532.00, stdev=77.09, samples=19 
00:28:04.842 iops : min= 96, max= 163, avg=133.00, stdev=19.27, samples=19 00:28:04.842 lat (msec) : 20=0.15%, 50=1.03%, 100=20.29%, 250=78.53% 00:28:04.842 cpu : usr=43.55%, sys=2.98%, ctx=1315, majf=0, minf=1073 00:28:04.842 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:28:04.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.842 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.842 issued rwts: total=1360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.842 filename2: (groupid=0, jobs=1): err= 0: pid=89644: Sun Jul 14 21:27:15 2024 00:28:04.842 read: IOPS=166, BW=666KiB/s (682kB/s)(6700KiB/10056msec) 00:28:04.842 slat (usec): min=6, max=8036, avg=26.09, stdev=235.92 00:28:04.842 clat (msec): min=27, max=154, avg=95.85, stdev=26.66 00:28:04.842 lat (msec): min=27, max=154, avg=95.88, stdev=26.66 00:28:04.842 clat percentiles (msec): 00:28:04.842 | 1.00th=[ 39], 5.00th=[ 53], 10.00th=[ 58], 20.00th=[ 71], 00:28:04.842 | 30.00th=[ 82], 40.00th=[ 88], 50.00th=[ 96], 60.00th=[ 106], 00:28:04.842 | 70.00th=[ 118], 80.00th=[ 125], 90.00th=[ 130], 95.00th=[ 132], 00:28:04.842 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 155], 00:28:04.842 | 99.99th=[ 155] 00:28:04.842 bw ( KiB/s): min= 560, max= 848, per=4.61%, avg=663.60, stdev=95.24, samples=20 00:28:04.842 iops : min= 140, max= 212, avg=165.90, stdev=23.81, samples=20 00:28:04.842 lat (msec) : 50=3.22%, 100=53.43%, 250=43.34% 00:28:04.842 cpu : usr=41.10%, sys=2.53%, ctx=1363, majf=0, minf=1074 00:28:04.842 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:28:04.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.843 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.843 issued rwts: total=1675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.843 filename2: (groupid=0, jobs=1): err= 0: pid=89645: Sun Jul 14 21:27:15 2024 00:28:04.843 read: IOPS=145, BW=582KiB/s (596kB/s)(5820KiB/10007msec) 00:28:04.843 slat (usec): min=5, max=8034, avg=32.30, stdev=299.77 00:28:04.843 clat (msec): min=11, max=254, avg=109.83, stdev=32.95 00:28:04.843 lat (msec): min=11, max=254, avg=109.87, stdev=32.95 00:28:04.843 clat percentiles (msec): 00:28:04.843 | 1.00th=[ 24], 5.00th=[ 56], 10.00th=[ 64], 20.00th=[ 84], 00:28:04.843 | 30.00th=[ 91], 40.00th=[ 112], 50.00th=[ 120], 60.00th=[ 126], 00:28:04.843 | 70.00th=[ 130], 80.00th=[ 132], 90.00th=[ 134], 95.00th=[ 136], 00:28:04.843 | 99.00th=[ 251], 99.50th=[ 251], 99.90th=[ 255], 99.95th=[ 255], 00:28:04.843 | 99.99th=[ 255] 00:28:04.843 bw ( KiB/s): min= 366, max= 848, per=3.88%, avg=559.89, stdev=143.13, samples=19 00:28:04.843 iops : min= 91, max= 212, avg=139.95, stdev=35.82, samples=19 00:28:04.843 lat (msec) : 20=0.89%, 50=1.99%, 100=31.55%, 250=64.47%, 500=1.10% 00:28:04.843 cpu : usr=37.92%, sys=2.87%, ctx=1233, majf=0, minf=1074 00:28:04.843 IO depths : 1=0.1%, 2=4.3%, 4=17.0%, 8=65.2%, 16=13.5%, 32=0.0%, >=64=0.0% 00:28:04.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.843 complete : 0=0.0%, 4=92.0%, 8=4.3%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.843 issued rwts: total=1455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:04.843 00:28:04.843 Run status group 
0 (all jobs): 00:28:04.843 READ: bw=14.1MiB/s (14.7MB/s), 542KiB/s-676KiB/s (555kB/s-693kB/s), io=142MiB (149MB), run=10003-10090msec 00:28:05.408 ----------------------------------------------------- 00:28:05.408 Suppressions used: 00:28:05.408 count bytes template 00:28:05.408 45 402 /usr/src/fio/parse.c 00:28:05.408 1 8 libtcmalloc_minimal.so 00:28:05.408 1 904 libcrypto.so 00:28:05.408 ----------------------------------------------------- 00:28:05.408 00:28:05.408 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:05.408 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:05.408 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:05.408 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:05.408 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:05.408 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:05.408 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.408 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.409 21:27:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.409 bdev_null0 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.409 [2024-07-14 21:27:16.835342] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # 
for sub in "$@" 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.409 bdev_null1 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.409 { 00:28:05.409 "params": { 00:28:05.409 "name": "Nvme$subsystem", 00:28:05.409 "trtype": "$TEST_TRANSPORT", 00:28:05.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.409 "adrfam": "ipv4", 00:28:05.409 "trsvcid": "$NVMF_PORT", 00:28:05.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.409 "hdgst": ${hdgst:-false}, 00:28:05.409 "ddgst": ${ddgst:-false} 00:28:05.409 }, 00:28:05.409 "method": "bdev_nvme_attach_controller" 00:28:05.409 } 00:28:05.409 EOF 00:28:05.409 )") 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 
00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.409 { 00:28:05.409 "params": { 00:28:05.409 "name": "Nvme$subsystem", 00:28:05.409 "trtype": "$TEST_TRANSPORT", 00:28:05.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.409 "adrfam": "ipv4", 00:28:05.409 "trsvcid": "$NVMF_PORT", 00:28:05.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.409 "hdgst": ${hdgst:-false}, 00:28:05.409 "ddgst": ${ddgst:-false} 00:28:05.409 }, 00:28:05.409 "method": "bdev_nvme_attach_controller" 00:28:05.409 } 00:28:05.409 EOF 00:28:05.409 )") 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
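Stripped of the helper functions, the invocation traced above amounts to preloading the SPDK fio bdev plugin (with the ASan runtime first, as resolved via ldd and grep libasan) and handing fio a bdev JSON config plus a job file. A rough standalone equivalent, where the config and job file names are placeholders rather than anything this job wrote to disk:

  # paths are illustrative; the harness feeds both files through /dev/fd instead
  LD_PRELOAD="/usr/lib64/libasan.so.8 ./build/fio/spdk_bdev" \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf ./nvmf_target.json ./dif_job.fio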
00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:05.409 21:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:05.409 "params": { 00:28:05.409 "name": "Nvme0", 00:28:05.409 "trtype": "tcp", 00:28:05.409 "traddr": "10.0.0.2", 00:28:05.409 "adrfam": "ipv4", 00:28:05.409 "trsvcid": "4420", 00:28:05.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:05.409 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:05.409 "hdgst": false, 00:28:05.409 "ddgst": false 00:28:05.410 }, 00:28:05.410 "method": "bdev_nvme_attach_controller" 00:28:05.410 },{ 00:28:05.410 "params": { 00:28:05.410 "name": "Nvme1", 00:28:05.410 "trtype": "tcp", 00:28:05.410 "traddr": "10.0.0.2", 00:28:05.410 "adrfam": "ipv4", 00:28:05.410 "trsvcid": "4420", 00:28:05.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:05.410 "hdgst": false, 00:28:05.410 "ddgst": false 00:28:05.410 }, 00:28:05.410 "method": "bdev_nvme_attach_controller" 00:28:05.410 }' 00:28:05.410 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:05.410 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:05.410 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:28:05.410 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:05.410 21:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.668 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:05.668 ... 00:28:05.668 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:05.668 ... 
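The printf above emits only the per-controller config entries; the stream fio reads on /dev/fd/62 wraps them in the usual SPDK JSON-config layout (a "subsystems" array containing a "bdev" subsystem). A minimal hand-written equivalent for a single controller might look roughly like the following; the exact output of gen_nvmf_target_json may differ in details such as an appended bdev_wait_for_examine entry.

  cat > nvmf_target.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF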
00:28:05.668 fio-3.35 00:28:05.668 Starting 4 threads 00:28:12.266 00:28:12.266 filename0: (groupid=0, jobs=1): err= 0: pid=89787: Sun Jul 14 21:27:23 2024 00:28:12.266 read: IOPS=1557, BW=12.2MiB/s (12.8MB/s)(60.9MiB/5001msec) 00:28:12.266 slat (nsec): min=5390, max=67485, avg=17399.98, stdev=5082.05 00:28:12.266 clat (usec): min=1547, max=10244, avg=5064.66, stdev=419.58 00:28:12.266 lat (usec): min=1562, max=10266, avg=5082.06, stdev=419.46 00:28:12.266 clat percentiles (usec): 00:28:12.266 | 1.00th=[ 3720], 5.00th=[ 4621], 10.00th=[ 4686], 20.00th=[ 4752], 00:28:12.266 | 30.00th=[ 4817], 40.00th=[ 5080], 50.00th=[ 5080], 60.00th=[ 5145], 00:28:12.266 | 70.00th=[ 5211], 80.00th=[ 5276], 90.00th=[ 5407], 95.00th=[ 5604], 00:28:12.266 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 9110], 99.95th=[ 9241], 00:28:12.266 | 99.99th=[10290] 00:28:12.266 bw ( KiB/s): min=11799, max=13488, per=23.42%, avg=12431.00, stdev=545.88, samples=9 00:28:12.266 iops : min= 1474, max= 1686, avg=1553.78, stdev=68.36, samples=9 00:28:12.266 lat (msec) : 2=0.05%, 4=1.26%, 10=98.68%, 20=0.01% 00:28:12.266 cpu : usr=92.38%, sys=6.74%, ctx=11, majf=0, minf=1072 00:28:12.266 IO depths : 1=0.1%, 2=24.3%, 4=50.5%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:12.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.266 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.266 issued rwts: total=7790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.266 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:12.266 filename0: (groupid=0, jobs=1): err= 0: pid=89788: Sun Jul 14 21:27:23 2024 00:28:12.266 read: IOPS=1614, BW=12.6MiB/s (13.2MB/s)(63.1MiB/5005msec) 00:28:12.266 slat (nsec): min=5376, max=57223, avg=16724.03, stdev=4957.72 00:28:12.266 clat (usec): min=1215, max=8742, avg=4892.35, stdev=652.75 00:28:12.266 lat (usec): min=1226, max=8768, avg=4909.07, stdev=653.18 00:28:12.266 clat percentiles (usec): 00:28:12.266 | 1.00th=[ 2008], 5.00th=[ 3228], 10.00th=[ 4621], 20.00th=[ 4686], 00:28:12.266 | 30.00th=[ 4752], 40.00th=[ 4948], 50.00th=[ 5080], 60.00th=[ 5145], 00:28:12.266 | 70.00th=[ 5211], 80.00th=[ 5276], 90.00th=[ 5342], 95.00th=[ 5407], 00:28:12.266 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 5932], 99.95th=[ 6980], 00:28:12.266 | 99.99th=[ 8717] 00:28:12.266 bw ( KiB/s): min=12032, max=14176, per=24.05%, avg=12766.22, stdev=809.64, samples=9 00:28:12.266 iops : min= 1504, max= 1772, avg=1595.78, stdev=101.20, samples=9 00:28:12.266 lat (msec) : 2=0.99%, 4=5.78%, 10=93.23% 00:28:12.266 cpu : usr=91.71%, sys=7.39%, ctx=9, majf=0, minf=1075 00:28:12.266 IO depths : 1=0.1%, 2=21.6%, 4=52.2%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:12.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.266 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.266 issued rwts: total=8080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.266 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:12.266 filename1: (groupid=0, jobs=1): err= 0: pid=89789: Sun Jul 14 21:27:23 2024 00:28:12.266 read: IOPS=1580, BW=12.3MiB/s (12.9MB/s)(61.8MiB/5001msec) 00:28:12.266 slat (nsec): min=7851, max=68833, avg=15024.76, stdev=5214.61 00:28:12.266 clat (usec): min=1175, max=11458, avg=4999.20, stdev=614.24 00:28:12.266 lat (usec): min=1185, max=11490, avg=5014.22, stdev=614.52 00:28:12.266 clat percentiles (usec): 00:28:12.266 | 1.00th=[ 2409], 5.00th=[ 4621], 10.00th=[ 4686], 20.00th=[ 4752], 00:28:12.266 | 
30.00th=[ 4817], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:28:12.266 | 70.00th=[ 5211], 80.00th=[ 5276], 90.00th=[ 5342], 95.00th=[ 5473], 00:28:12.266 | 99.00th=[ 6587], 99.50th=[ 7439], 99.90th=[11207], 99.95th=[11207], 00:28:12.266 | 99.99th=[11469] 00:28:12.266 bw ( KiB/s): min=11776, max=14352, per=23.88%, avg=12673.78, stdev=902.00, samples=9 00:28:12.266 iops : min= 1472, max= 1794, avg=1584.22, stdev=112.75, samples=9 00:28:12.266 lat (msec) : 2=0.53%, 4=3.21%, 10=96.15%, 20=0.10% 00:28:12.266 cpu : usr=91.98%, sys=7.12%, ctx=10, majf=0, minf=1074 00:28:12.266 IO depths : 1=0.1%, 2=23.2%, 4=51.2%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:12.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.266 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.266 issued rwts: total=7905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.266 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:12.266 filename1: (groupid=0, jobs=1): err= 0: pid=89790: Sun Jul 14 21:27:23 2024 00:28:12.266 read: IOPS=1884, BW=14.7MiB/s (15.4MB/s)(73.7MiB/5002msec) 00:28:12.266 slat (nsec): min=5447, max=65212, avg=17009.04, stdev=5270.34 00:28:12.266 clat (usec): min=1547, max=8525, avg=4193.44, stdev=1088.83 00:28:12.266 lat (usec): min=1563, max=8574, avg=4210.45, stdev=1088.34 00:28:12.266 clat percentiles (usec): 00:28:12.266 | 1.00th=[ 2507], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737], 00:28:12.266 | 30.00th=[ 3097], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4817], 00:28:12.266 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5473], 00:28:12.266 | 99.00th=[ 6063], 99.50th=[ 6587], 99.90th=[ 6849], 99.95th=[ 7177], 00:28:12.266 | 99.99th=[ 8586] 00:28:12.266 bw ( KiB/s): min=12288, max=16896, per=28.90%, avg=15340.44, stdev=1861.83, samples=9 00:28:12.266 iops : min= 1536, max= 2112, avg=1917.56, stdev=232.73, samples=9 00:28:12.266 lat (msec) : 2=0.05%, 4=35.61%, 10=64.34% 00:28:12.266 cpu : usr=90.98%, sys=8.00%, ctx=10, majf=0, minf=1075 00:28:12.266 IO depths : 1=0.1%, 2=8.6%, 4=59.1%, 8=32.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:12.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.266 complete : 0=0.0%, 4=96.8%, 8=3.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.266 issued rwts: total=9428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.266 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:12.266 00:28:12.266 Run status group 0 (all jobs): 00:28:12.266 READ: bw=51.8MiB/s (54.3MB/s), 12.2MiB/s-14.7MiB/s (12.8MB/s-15.4MB/s), io=259MiB (272MB), run=5001-5005msec 00:28:12.831 ----------------------------------------------------- 00:28:12.831 Suppressions used: 00:28:12.831 count bytes template 00:28:12.831 6 52 /usr/src/fio/parse.c 00:28:12.831 1 8 libtcmalloc_minimal.so 00:28:12.831 1 904 libcrypto.so 00:28:12.831 ----------------------------------------------------- 00:28:12.831 00:28:12.831 21:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:12.831 21:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:12.831 21:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:12.831 21:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:12.831 21:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:12.831 21:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:28:12.831 21:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.831 21:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.831 21:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.831 21:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:12.832 ************************************ 00:28:12.832 END TEST fio_dif_rand_params 00:28:12.832 ************************************ 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.832 00:28:12.832 real 0m28.131s 00:28:12.832 user 2m7.508s 00:28:12.832 sys 0m9.726s 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:12.832 21:27:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:13.090 21:27:24 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:13.090 21:27:24 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:13.090 21:27:24 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:13.090 21:27:24 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:13.090 21:27:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:13.090 ************************************ 00:28:13.090 START TEST fio_dif_digest 00:28:13.090 ************************************ 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- 
target/dif.sh@127 -- # iodepth=3 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:13.090 bdev_null0 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:13.090 [2024-07-14 21:27:24.466757] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.090 { 00:28:13.090 "params": { 00:28:13.090 "name": "Nvme$subsystem", 00:28:13.090 "trtype": "$TEST_TRANSPORT", 00:28:13.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.090 "adrfam": "ipv4", 00:28:13.090 "trsvcid": "$NVMF_PORT", 00:28:13.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.090 "hdgst": ${hdgst:-false}, 00:28:13.090 "ddgst": ${ddgst:-false} 00:28:13.090 }, 00:28:13.090 "method": "bdev_nvme_attach_controller" 00:28:13.090 } 00:28:13.090 EOF 00:28:13.090 )") 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
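The trace above is the core of every dif fio pass: gen_nvmf_target_json assembles a bdev JSON config whose bdev_nvme_attach_controller parameters carry the hdgst/ddgst switches (the resolved form is printed just below), gen_fio_conf builds the job file, and both are fed to fio over /dev/fd descriptors with the spdk_bdev plugin LD_PRELOADed behind libasan. Pulled out of the harness, the same invocation looks roughly like the sketch below; the job-file body and the Nvme0n1 bdev name follow SPDK's usual conventions and are assumptions, not a copy of what gen_fio_conf emits.

PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev   # bdev fio plugin, path as in the trace

# JSON config in SPDK's standard shape: one bdev-subsystem entry that attaches
# the NVMe/TCP controller with header and data digests enabled.
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF

# Job file approximating the digest run seen below: 3 threads, 128k random reads,
# queue depth 3, 10 seconds. "Nvme0n1" is namespace 1 of the controller above.
cat > digest.fio <<'EOF'
[global]
thread=1
bs=128k
iodepth=3
rw=randread
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
numjobs=3
EOF

# On an ASAN build the sanitizer runtime must be preloaded ahead of the plugin,
# exactly as the harness does; drop libasan from LD_PRELOAD otherwise.
LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" \
    fio --ioengine=spdk_bdev --spdk_json_conf bdev.json digest.fio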
00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:13.090 "params": { 00:28:13.090 "name": "Nvme0", 00:28:13.090 "trtype": "tcp", 00:28:13.090 "traddr": "10.0.0.2", 00:28:13.090 "adrfam": "ipv4", 00:28:13.090 "trsvcid": "4420", 00:28:13.090 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:13.090 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:13.090 "hdgst": true, 00:28:13.090 "ddgst": true 00:28:13.090 }, 00:28:13.090 "method": "bdev_nvme_attach_controller" 00:28:13.090 }' 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:13.090 21:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:13.349 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:13.349 ... 00:28:13.349 fio-3.35 00:28:13.349 Starting 3 threads 00:28:25.575 00:28:25.575 filename0: (groupid=0, jobs=1): err= 0: pid=89896: Sun Jul 14 21:27:35 2024 00:28:25.575 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(239MiB/10013msec) 00:28:25.575 slat (nsec): min=4557, max=90682, avg=19082.02, stdev=7347.68 00:28:25.575 clat (usec): min=13970, max=20653, avg=15698.90, stdev=1186.07 00:28:25.575 lat (usec): min=13984, max=20678, avg=15717.98, stdev=1187.17 00:28:25.575 clat percentiles (usec): 00:28:25.575 | 1.00th=[14091], 5.00th=[14222], 10.00th=[14353], 20.00th=[14615], 00:28:25.575 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15533], 60.00th=[15795], 00:28:25.575 | 70.00th=[16057], 80.00th=[16450], 90.00th=[17433], 95.00th=[17957], 00:28:25.575 | 99.00th=[18744], 99.50th=[19006], 99.90th=[20579], 99.95th=[20579], 00:28:25.575 | 99.99th=[20579] 00:28:25.575 bw ( KiB/s): min=21333, max=26112, per=33.31%, avg=24375.45, stdev=1570.64, samples=20 00:28:25.575 iops : min= 166, max= 204, avg=190.40, stdev=12.34, samples=20 00:28:25.575 lat (msec) : 20=99.84%, 50=0.16% 00:28:25.575 cpu : usr=91.37%, sys=7.97%, ctx=25, majf=0, minf=1075 00:28:25.575 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:25.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.575 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.575 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:25.575 filename0: (groupid=0, jobs=1): err= 0: pid=89897: Sun Jul 14 21:27:35 2024 00:28:25.575 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(239MiB/10012msec) 00:28:25.575 slat (nsec): min=5586, max=75719, avg=18982.70, stdev=7110.98 00:28:25.575 clat (usec): min=13973, max=19558, avg=15696.35, stdev=1177.76 00:28:25.575 lat (usec): min=13987, max=19581, avg=15715.34, stdev=1178.91 00:28:25.575 clat percentiles (usec): 00:28:25.575 | 1.00th=[14091], 5.00th=[14222], 10.00th=[14353], 20.00th=[14615], 00:28:25.575 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15533], 60.00th=[15795], 00:28:25.575 
| 70.00th=[16057], 80.00th=[16450], 90.00th=[17433], 95.00th=[17957], 00:28:25.575 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19530], 99.95th=[19530], 00:28:25.575 | 99.99th=[19530] 00:28:25.575 bw ( KiB/s): min=21418, max=26112, per=33.32%, avg=24379.70, stdev=1562.07, samples=20 00:28:25.575 iops : min= 167, max= 204, avg=190.45, stdev=12.24, samples=20 00:28:25.575 lat (msec) : 20=100.00% 00:28:25.575 cpu : usr=91.92%, sys=7.42%, ctx=12, majf=0, minf=1074 00:28:25.575 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:25.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.575 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.575 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:25.575 filename0: (groupid=0, jobs=1): err= 0: pid=89898: Sun Jul 14 21:27:35 2024 00:28:25.575 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(239MiB/10011msec) 00:28:25.575 slat (usec): min=5, max=111, avg=14.82, stdev= 7.83 00:28:25.575 clat (usec): min=11391, max=23681, avg=15702.28, stdev=1218.86 00:28:25.575 lat (usec): min=11400, max=23701, avg=15717.10, stdev=1219.95 00:28:25.575 clat percentiles (usec): 00:28:25.575 | 1.00th=[14091], 5.00th=[14222], 10.00th=[14353], 20.00th=[14615], 00:28:25.575 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15533], 60.00th=[15926], 00:28:25.575 | 70.00th=[16057], 80.00th=[16581], 90.00th=[17695], 95.00th=[17957], 00:28:25.575 | 99.00th=[18744], 99.50th=[19006], 99.90th=[23725], 99.95th=[23725], 00:28:25.575 | 99.99th=[23725] 00:28:25.575 bw ( KiB/s): min=21461, max=26880, per=33.32%, avg=24381.85, stdev=1597.16, samples=20 00:28:25.575 iops : min= 167, max= 210, avg=190.45, stdev=12.54, samples=20 00:28:25.575 lat (msec) : 20=99.84%, 50=0.16% 00:28:25.575 cpu : usr=92.40%, sys=6.94%, ctx=20, majf=0, minf=1072 00:28:25.575 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:25.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.575 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.575 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:25.575 00:28:25.575 Run status group 0 (all jobs): 00:28:25.575 READ: bw=71.5MiB/s (74.9MB/s), 23.8MiB/s-23.8MiB/s (25.0MB/s-25.0MB/s), io=716MiB (750MB), run=10011-10013msec 00:28:25.575 ----------------------------------------------------- 00:28:25.575 Suppressions used: 00:28:25.575 count bytes template 00:28:25.575 5 44 /usr/src/fio/parse.c 00:28:25.575 1 8 libtcmalloc_minimal.so 00:28:25.575 1 904 libcrypto.so 00:28:25.575 ----------------------------------------------------- 00:28:25.575 00:28:25.575 21:27:36 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:25.575 21:27:36 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:25.575 21:27:36 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:25.575 21:27:36 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:25.575 21:27:36 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:25.575 21:27:36 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:25.575 21:27:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.575 21:27:36 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@10 -- # set +x 00:28:25.575 21:27:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.575 21:27:36 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:25.575 21:27:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.575 21:27:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:25.575 ************************************ 00:28:25.575 END TEST fio_dif_digest 00:28:25.575 ************************************ 00:28:25.575 21:27:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.576 00:28:25.576 real 0m12.342s 00:28:25.576 user 0m29.487s 00:28:25.576 sys 0m2.623s 00:28:25.576 21:27:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:25.576 21:27:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:25.576 21:27:36 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:25.576 21:27:36 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:25.576 21:27:36 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:25.576 21:27:36 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:25.576 21:27:36 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:25.576 21:27:36 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:25.576 21:27:36 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:25.576 21:27:36 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:25.576 21:27:36 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:25.576 rmmod nvme_tcp 00:28:25.576 rmmod nvme_fabrics 00:28:25.576 rmmod nvme_keyring 00:28:25.576 21:27:36 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:25.576 21:27:36 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:25.576 21:27:36 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:25.576 21:27:36 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 89135 ']' 00:28:25.576 21:27:36 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 89135 00:28:25.576 21:27:36 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 89135 ']' 00:28:25.576 21:27:36 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 89135 00:28:25.576 21:27:36 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:28:25.576 21:27:36 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:25.576 21:27:36 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89135 00:28:25.576 killing process with pid 89135 00:28:25.576 21:27:36 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:25.576 21:27:36 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:25.576 21:27:36 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89135' 00:28:25.576 21:27:36 nvmf_dif -- common/autotest_common.sh@967 -- # kill 89135 00:28:25.576 21:27:36 nvmf_dif -- common/autotest_common.sh@972 -- # wait 89135 00:28:26.966 21:27:38 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:26.966 21:27:38 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:26.966 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:26.966 Waiting for block devices as requested 00:28:26.966 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:27.225 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:27.225 21:27:38 nvmf_dif -- nvmf/common.sh@495 
-- # [[ tcp == \t\c\p ]] 00:28:27.225 21:27:38 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:27.225 21:27:38 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:27.225 21:27:38 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:27.225 21:27:38 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.225 21:27:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:27.225 21:27:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.225 21:27:38 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:27.225 ************************************ 00:28:27.225 END TEST nvmf_dif 00:28:27.225 ************************************ 00:28:27.225 00:28:27.225 real 1m9.404s 00:28:27.225 user 4m5.153s 00:28:27.225 sys 0m20.572s 00:28:27.225 21:27:38 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:27.225 21:27:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:27.225 21:27:38 -- common/autotest_common.sh@1142 -- # return 0 00:28:27.225 21:27:38 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:27.225 21:27:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:27.225 21:27:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:27.225 21:27:38 -- common/autotest_common.sh@10 -- # set +x 00:28:27.225 ************************************ 00:28:27.225 START TEST nvmf_abort_qd_sizes 00:28:27.225 ************************************ 00:28:27.225 21:27:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:27.484 * Looking for test storage... 
00:28:27.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:27.484 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:27.485 21:27:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:27.485 Cannot find device "nvmf_tgt_br" 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:27.485 Cannot find device "nvmf_tgt_br2" 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:27.485 Cannot find device "nvmf_tgt_br" 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:27.485 Cannot find device "nvmf_tgt_br2" 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:27.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:27.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:27.485 21:27:38 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:27.485 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:27.485 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:27.743 21:27:39 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:27.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:28:27.743 00:28:27.743 --- 10.0.0.2 ping statistics --- 00:28:27.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.743 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:27.743 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:27.743 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:28:27.743 00:28:27.743 --- 10.0.0.3 ping statistics --- 00:28:27.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.743 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:28:27.743 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:27.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:27.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:28:27.744 00:28:27.744 --- 10.0.0.1 ping statistics --- 00:28:27.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.744 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:28:27.744 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.744 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:28:27.744 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:27.744 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:28.310 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:28.568 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:28.568 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:28.568 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.568 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:28.568 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:28.568 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.568 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:28.568 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:28.568 21:27:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:28.568 21:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:28.568 21:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:28.568 21:27:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:28.568 21:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=90510 00:28:28.568 21:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:28.568 21:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 90510 00:28:28.568 21:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 90510 ']' 00:28:28.568 21:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.568 21:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:28.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.568 21:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.568 21:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:28.568 21:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:28.568 [2024-07-14 21:27:40.090274] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
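Everything from ip netns add through the three pings above is nvmf_veth_init building the virtual topology the TCP tests run on: one veth leg per target listener, the peer ends bridged in the root namespace, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace where the target (its startup banner starts just above and continues below) will listen. Condensed into a standalone sketch, with names and addresses taken from the trace and the teardown of any previous run omitted:

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# Three veth pairs: the *_if end carries traffic, the *_br end joins the bridge.
# The two target-facing interfaces are moved into the namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# 10.0.0.1 is the initiator; 10.0.0.2 and 10.0.0.3 are the target's listen addresses.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if  up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# A bridge in the root namespace ties the three peer ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP traffic in on the initiator leg and allow bridged forwarding.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks mirroring the pings in the trace.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec "$NS" ping -c 1 10.0.0.1

Launching nvmf_tgt under ip netns exec then keeps its listeners off the host's real interfaces while the initiator-side tools still reach it across the bridge.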
00:28:28.569 [2024-07-14 21:27:40.090434] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.827 [2024-07-14 21:27:40.254426] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:29.085 [2024-07-14 21:27:40.489158] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.085 [2024-07-14 21:27:40.489513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:29.085 [2024-07-14 21:27:40.489696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.085 [2024-07-14 21:27:40.489970] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.085 [2024-07-14 21:27:40.490173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:29.085 [2024-07-14 21:27:40.490553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.085 [2024-07-14 21:27:40.490709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:29.085 [2024-07-14 21:27:40.491187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:29.085 [2024-07-14 21:27:40.491192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.343 [2024-07-14 21:27:40.685810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:28:29.601 21:27:41 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:28:29.601 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
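nvme_in_userspace above finds two controllers, 0000:00:10.0 and 0000:00:11.0, and the first BDF is chosen as the abort target on the next line. The enumeration itself is just an lspci pipeline keyed on PCI class 01, subclass 08, prog-if 02; a rough standalone equivalent, leaving out the pci_can_use and driver-binding checks the helper layers on top:

# List NVMe controller BDFs the way iter_pci_class_code 01 08 02 does: lspci -mm -n -D
# prints one quoted record per device with the full BDF and numeric class code, the
# grep keeps prog-if 02 entries, and the awk keeps class "0108" (mass storage / NVMe).
mapfile -t nvmes < <(lspci -mm -n -D | grep -i -- -p02 |
    awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"')

(( ${#nvmes[@]} )) && printf '%s\n' "${nvmes[@]}"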
00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:29.602 21:27:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:29.602 ************************************ 00:28:29.602 START TEST spdk_target_abort 00:28:29.602 ************************************ 00:28:29.602 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:28:29.602 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:29.602 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:28:29.602 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.602 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:29.860 spdk_targetn1 00:28:29.860 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.860 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:29.860 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.860 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:29.860 [2024-07-14 21:27:41.213383] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.860 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.860 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:29.860 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.860 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:29.860 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.860 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:29.860 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.860 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:29.860 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:29.861 [2024-07-14 21:27:41.247361] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.861 21:27:41 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:29.861 21:27:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:33.146 Initializing NVMe Controllers 00:28:33.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:33.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:33.146 Initialization complete. Launching workers. 
00:28:33.146 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8754, failed: 0 00:28:33.146 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1030, failed to submit 7724 00:28:33.146 success 879, unsuccess 151, failed 0 00:28:33.146 21:27:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:33.146 21:27:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:37.354 Initializing NVMe Controllers 00:28:37.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:37.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:37.354 Initialization complete. Launching workers. 00:28:37.354 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8832, failed: 0 00:28:37.354 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1171, failed to submit 7661 00:28:37.354 success 392, unsuccess 779, failed 0 00:28:37.354 21:27:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:37.354 21:27:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:39.889 Initializing NVMe Controllers 00:28:39.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:39.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:39.889 Initialization complete. Launching workers. 
00:28:39.889 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27338, failed: 0 00:28:39.889 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2261, failed to submit 25077 00:28:39.889 success 345, unsuccess 1916, failed 0 00:28:39.889 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:39.889 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.889 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:39.889 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.889 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:39.889 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.889 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:40.455 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.455 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 90510 00:28:40.455 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 90510 ']' 00:28:40.455 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 90510 00:28:40.455 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:28:40.455 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:40.455 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90510 00:28:40.455 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:40.455 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:40.455 killing process with pid 90510 00:28:40.455 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90510' 00:28:40.455 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 90510 00:28:40.455 21:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 90510 00:28:41.392 00:28:41.392 real 0m11.815s 00:28:41.392 user 0m45.856s 00:28:41.392 sys 0m2.362s 00:28:41.392 21:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:41.392 21:27:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:41.392 ************************************ 00:28:41.392 END TEST spdk_target_abort 00:28:41.392 ************************************ 00:28:41.652 21:27:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:28:41.652 21:27:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:41.652 21:27:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:41.652 21:27:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:41.652 21:27:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:41.652 
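Before the kernel-target variant starts, note that the spdk_target_abort pass that just finished reduces to a handful of RPCs plus the abort example: attach the local PCIe controller as a bdev, export its namespace over NVMe/TCP, then sweep the queue depth so each abort has progressively more outstanding I/O to chase (compare the submitted and failed-to-submit counts above). Outside the harness the same flow looks roughly like the sketch below, with rpc.py standing in for the suite's rpc_cmd wrapper and the BDF, address and NQN taken from the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumes the target's default RPC socket
NQN=nqn.2016-06.io.spdk:testnqn

# Attach the local PCIe controller; its namespace shows up as bdev "spdk_targetn1".
$RPC bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target

# Export that bdev over NVMe/TCP on 10.0.0.2:4420.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns "$NQN" spdk_targetn1
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Abort sweep: -q sets the queue depth, -w rw -M 50 requests a 50/50 read/write mix,
# -o 4096 uses 4 KiB I/O, and -r points at the subsystem created above.
for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"
done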
************************************ 00:28:41.652 START TEST kernel_target_abort 00:28:41.652 ************************************ 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:41.652 21:27:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:41.652 21:27:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:41.652 21:27:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:41.912 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:41.912 Waiting for block devices as requested 00:28:41.912 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:42.169 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:42.427 21:27:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:42.427 21:27:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:42.427 21:27:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:42.427 21:27:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:42.427 21:27:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:42.427 21:27:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:42.427 21:27:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:42.427 21:27:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:42.427 21:27:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:42.685 No valid GPT data, bailing 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:42.685 No valid GPT data, bailing 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:42.685 No valid GPT data, bailing 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:28:42.685 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:42.943 No valid GPT data, bailing 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 --hostid=e5dc810d-291e-43ba-88f4-ab46cda07291 -a 10.0.0.1 -t tcp -s 4420 00:28:42.943 00:28:42.943 Discovery Log Number of Records 2, Generation counter 2 00:28:42.943 =====Discovery Log Entry 0====== 00:28:42.943 trtype: tcp 00:28:42.943 adrfam: ipv4 00:28:42.943 subtype: current discovery subsystem 00:28:42.943 treq: not specified, sq flow control disable supported 00:28:42.943 portid: 1 00:28:42.943 trsvcid: 4420 00:28:42.943 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:42.943 traddr: 10.0.0.1 00:28:42.943 eflags: none 00:28:42.943 sectype: none 00:28:42.943 =====Discovery Log Entry 1====== 00:28:42.943 trtype: tcp 00:28:42.943 adrfam: ipv4 00:28:42.943 subtype: nvme subsystem 00:28:42.943 treq: not specified, sq flow control disable supported 00:28:42.943 portid: 1 00:28:42.943 trsvcid: 4420 00:28:42.943 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:42.943 traddr: 10.0.0.1 00:28:42.943 eflags: none 00:28:42.943 sectype: none 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:42.943 21:27:54 
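The nvmf/common.sh steps traced above build the kernel NVMe-oF/TCP target purely through configfs: the first unused, non-zoned namespace (/dev/nvme1n1 here) is exported under nqn.2016-06.io.spdk:testnqn and published on a TCP port at 10.0.0.1:4420. xtrace does not print the redirect targets of the echo commands, so the attribute paths in the sketch below are a reconstruction of what the helper most likely writes; the NQN, device and address are taken from the trace, and module loading and error handling are omitted.

    # assumes nvmet and nvmet_tcp are already loaded and configfs is mounted
    nqn=nqn.2016-06.io.spdk:testnqn
    cfs=/sys/kernel/config/nvmet

    mkdir "$cfs/subsystems/$nqn"
    mkdir "$cfs/subsystems/$nqn/namespaces/1"
    mkdir "$cfs/ports/1"

    echo "SPDK-$nqn"  > "$cfs/subsystems/$nqn/attr_model"             # cosmetic model string
    echo 1            > "$cfs/subsystems/$nqn/attr_allow_any_host"
    echo /dev/nvme1n1 > "$cfs/subsystems/$nqn/namespaces/1/device_path"
    echo 1            > "$cfs/subsystems/$nqn/namespaces/1/enable"

    echo 10.0.0.1 > "$cfs/ports/1/addr_traddr"
    echo tcp      > "$cfs/ports/1/addr_trtype"
    echo 4420     > "$cfs/ports/1/addr_trsvcid"
    echo ipv4     > "$cfs/ports/1/addr_adrfam"

    ln -s "$cfs/subsystems/$nqn" "$cfs/ports/1/subsystems/"           # expose the subsystem on the port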
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:42.943 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:42.944 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:42.944 21:27:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:46.235 Initializing NVMe Controllers 00:28:46.235 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:46.235 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:46.235 Initialization complete. Launching workers. 00:28:46.235 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 26998, failed: 0 00:28:46.235 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26998, failed to submit 0 00:28:46.235 success 0, unsuccess 26998, failed 0 00:28:46.235 21:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:46.235 21:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:49.518 Initializing NVMe Controllers 00:28:49.518 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:49.518 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:49.518 Initialization complete. Launching workers. 
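rabort, traced above, simply assembles the five target attributes into one SPDK transport-ID string and runs the abort example once per queue depth. Condensed, with the values from the trace and paths shortened:

    trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        # 4 KiB I/O, 50/50 read/write; the example submits aborts against its own outstanding commands
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$trid"
    done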
00:28:49.518 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55122, failed: 0 00:28:49.518 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23347, failed to submit 31775 00:28:49.518 success 0, unsuccess 23347, failed 0 00:28:49.518 21:28:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:49.518 21:28:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:52.831 Initializing NVMe Controllers 00:28:52.831 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:52.831 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:52.831 Initialization complete. Launching workers. 00:28:52.831 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61453, failed: 0 00:28:52.831 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15350, failed to submit 46103 00:28:52.831 success 0, unsuccess 15350, failed 0 00:28:52.831 21:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:52.831 21:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:52.831 21:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:28:52.831 21:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:52.831 21:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:52.831 21:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:52.831 21:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:52.831 21:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:52.831 21:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:52.831 21:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:53.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:53.966 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:53.966 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:54.224 00:28:54.224 real 0m12.548s 00:28:54.224 user 0m6.705s 00:28:54.224 sys 0m3.500s 00:28:54.224 21:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:54.224 21:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:54.224 ************************************ 00:28:54.224 END TEST kernel_target_abort 00:28:54.224 ************************************ 00:28:54.224 21:28:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:28:54.224 21:28:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:54.224 
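clean_kernel_target, traced above, unwinds the configfs tree in reverse order and then unloads the target modules before setup.sh rebinds the PCI devices. As with the setup, xtrace does not show where the echo writes; namespaces/1/enable is the assumed destination. A mirror-image sketch:

    nqn=nqn.2016-06.io.spdk:testnqn
    cfs=/sys/kernel/config/nvmet

    echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"    # assumed: disable the namespace first
    rm -f  "$cfs/ports/1/subsystems/$nqn"                  # unpublish from the port
    rmdir  "$cfs/subsystems/$nqn/namespaces/1"
    rmdir  "$cfs/ports/1"
    rmdir  "$cfs/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet                            # only safe when nothing else holds the modules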
21:28:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:54.224 21:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:54.224 21:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:28:54.224 21:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:54.224 21:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:28:54.225 21:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:54.225 21:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:54.225 rmmod nvme_tcp 00:28:54.225 rmmod nvme_fabrics 00:28:54.225 rmmod nvme_keyring 00:28:54.225 21:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:54.225 21:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:28:54.225 21:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:28:54.225 21:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 90510 ']' 00:28:54.225 21:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 90510 00:28:54.225 21:28:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 90510 ']' 00:28:54.225 21:28:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 90510 00:28:54.225 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (90510) - No such process 00:28:54.225 Process with pid 90510 is not found 00:28:54.225 21:28:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 90510 is not found' 00:28:54.225 21:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:54.225 21:28:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:54.484 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:54.484 Waiting for block devices as requested 00:28:54.743 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:54.743 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:54.743 21:28:06 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:54.743 21:28:06 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:54.743 21:28:06 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:54.743 21:28:06 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:54.743 21:28:06 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.743 21:28:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:54.743 21:28:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.743 21:28:06 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:54.743 00:28:54.743 real 0m27.510s 00:28:54.743 user 0m53.743s 00:28:54.743 sys 0m7.124s 00:28:54.743 21:28:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:54.743 21:28:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:54.743 ************************************ 00:28:54.743 END TEST nvmf_abort_qd_sizes 00:28:54.743 ************************************ 00:28:54.743 21:28:06 -- common/autotest_common.sh@1142 -- # return 0 00:28:54.743 21:28:06 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:54.743 21:28:06 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:28:54.743 21:28:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:54.743 21:28:06 -- common/autotest_common.sh@10 -- # set +x 00:28:54.743 ************************************ 00:28:54.743 START TEST keyring_file 00:28:54.743 ************************************ 00:28:54.743 21:28:06 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:55.002 * Looking for test storage... 00:28:55.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:55.002 21:28:06 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:55.002 21:28:06 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.002 21:28:06 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.002 21:28:06 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.002 21:28:06 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.002 21:28:06 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.002 21:28:06 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.002 21:28:06 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:55.002 21:28:06 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@47 -- # : 0 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:55.002 21:28:06 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:55.002 21:28:06 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:55.002 21:28:06 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:55.002 21:28:06 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:55.002 21:28:06 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:55.002 21:28:06 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.E2gFUTNPBX 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.E2gFUTNPBX 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.E2gFUTNPBX 00:28:55.002 21:28:06 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.E2gFUTNPBX 00:28:55.002 21:28:06 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.whL8SJ42gW 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:55.002 21:28:06 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.whL8SJ42gW 00:28:55.002 21:28:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.whL8SJ42gW 00:28:55.002 21:28:06 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.whL8SJ42gW 00:28:55.002 21:28:06 keyring_file -- keyring/file.sh@30 -- # tgtpid=91515 00:28:55.003 21:28:06 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:55.003 21:28:06 keyring_file -- keyring/file.sh@32 -- # waitforlisten 91515 00:28:55.003 21:28:06 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 91515 ']' 00:28:55.003 21:28:06 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.003 21:28:06 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:55.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.003 21:28:06 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.003 21:28:06 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:55.003 21:28:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:55.262 [2024-07-14 21:28:06.657738] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
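prep_key, traced above, writes each test key into a mktemp file in the NVMe TLS PSK interchange format and restricts it to mode 0600 (a looser mode is rejected later in this run). The python heredoc behind format_interchange_psk is not echoed by xtrace; the one-liner below is a plausible reconstruction of that encoding (prefix, two-hex-digit hash indicator, base64 of the key bytes plus a little-endian CRC32), and treating the key as raw ASCII bytes is likewise an assumption.

    key=00112233445566778899aabbccddeeff    # key0 from the trace
    digest=0                                # 0 = PSK not hashed
    path=$(mktemp)                          # e.g. /tmp/tmp.E2gFUTNPBX

    # NVMeTLSkey-1:<digest as 2 hex digits>:<base64(key || CRC32(key))>:
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); d=int(sys.argv[2]); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02x:%s:" % (d, base64.b64encode(k+crc).decode()), end="")' "$key" "$digest" > "$path"

    chmod 0600 "$path"                      # keyring_file_add_key refuses key files readable by group/other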
00:28:55.262 [2024-07-14 21:28:06.657967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91515 ] 00:28:55.520 [2024-07-14 21:28:06.838226] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.778 [2024-07-14 21:28:07.091438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.778 [2024-07-14 21:28:07.290420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:56.345 21:28:07 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:56.345 21:28:07 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:56.345 21:28:07 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:56.345 21:28:07 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.345 21:28:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:56.345 [2024-07-14 21:28:07.878936] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.604 null0 00:28:56.604 [2024-07-14 21:28:07.910825] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:56.604 [2024-07-14 21:28:07.911224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:56.604 [2024-07-14 21:28:07.918854] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.604 21:28:07 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:56.604 [2024-07-14 21:28:07.930837] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:56.604 request: 00:28:56.604 { 00:28:56.604 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:56.604 "secure_channel": false, 00:28:56.604 "listen_address": { 00:28:56.604 "trtype": "tcp", 00:28:56.604 "traddr": "127.0.0.1", 00:28:56.604 "trsvcid": "4420" 00:28:56.604 }, 00:28:56.604 "method": "nvmf_subsystem_add_listener", 00:28:56.604 "req_id": 1 00:28:56.604 } 00:28:56.604 Got JSON-RPC error response 00:28:56.604 response: 00:28:56.604 { 00:28:56.604 "code": -32602, 00:28:56.604 "message": "Invalid parameters" 00:28:56.604 } 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
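The rpc_cmd at file.sh@33 feeds spdk_tgt a heredoc that xtrace does not print; only its side effects appear above (a null0 bdev, the TCP transport init notice, a TLS listener on 127.0.0.1:4420, and the PSK-path deprecation warning from the host registration). The sequence below is a rough guess at that heredoc, consistent with those notices; the bdev sizes and exact flag spellings are assumptions, not taken from the log:

    rpc=./scripts/rpc.py                                   # default spdk_tgt socket, /var/tmp/spdk.sock
    $rpc bdev_null_create null0 100 4096                   # sizes assumed
    $rpc nvmf_create_transport -t tcp
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 --psk "$key0path"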
00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:56.604 21:28:07 keyring_file -- keyring/file.sh@46 -- # bperfpid=91532 00:28:56.604 21:28:07 keyring_file -- keyring/file.sh@48 -- # waitforlisten 91532 /var/tmp/bperf.sock 00:28:56.604 21:28:07 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 91532 ']' 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:56.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:56.604 21:28:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:56.604 [2024-07-14 21:28:08.032843] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:56.604 [2024-07-14 21:28:08.033058] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91532 ] 00:28:56.862 [2024-07-14 21:28:08.201797] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.121 [2024-07-14 21:28:08.458729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.121 [2024-07-14 21:28:08.623857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:57.690 21:28:09 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:57.690 21:28:09 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:57.690 21:28:09 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.E2gFUTNPBX 00:28:57.690 21:28:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.E2gFUTNPBX 00:28:57.949 21:28:09 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.whL8SJ42gW 00:28:57.949 21:28:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.whL8SJ42gW 00:28:58.208 21:28:09 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:58.208 21:28:09 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:58.208 21:28:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:58.208 21:28:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:58.208 21:28:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:58.466 21:28:09 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.E2gFUTNPBX == 
\/\t\m\p\/\t\m\p\.\E\2\g\F\U\T\N\P\B\X ]] 00:28:58.466 21:28:09 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:58.466 21:28:09 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:28:58.466 21:28:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:58.466 21:28:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:58.466 21:28:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:58.726 21:28:10 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.whL8SJ42gW == \/\t\m\p\/\t\m\p\.\w\h\L\8\S\J\4\2\g\W ]] 00:28:58.726 21:28:10 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:58.726 21:28:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:58.726 21:28:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:58.726 21:28:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:58.726 21:28:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:58.726 21:28:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:58.985 21:28:10 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:58.985 21:28:10 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:58.985 21:28:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:58.985 21:28:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:58.985 21:28:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:58.985 21:28:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:58.985 21:28:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.244 21:28:10 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:59.244 21:28:10 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:59.244 21:28:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:59.244 [2024-07-14 21:28:10.772588] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:59.503 nvme0n1 00:28:59.503 21:28:10 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:59.503 21:28:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:59.503 21:28:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:59.503 21:28:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.503 21:28:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.503 21:28:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:59.763 21:28:11 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:28:59.763 21:28:11 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:28:59.763 21:28:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:59.764 21:28:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:59.764 21:28:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
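get_key and get_refcnt, used repeatedly above, are thin wrappers around the keyring_get_keys RPC plus jq filters. A self-contained version of the pattern, with the socket path from the trace:

    get_refcnt() {    # print the refcnt of one named key registered with bdevperf
        ./scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
            | jq -r ".[] | select(.name == \"$1\") | .refcnt"
    }

    (( $(get_refcnt key0) == 2 ))    # per the trace: one reference from the keyring, one from the attached controller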
keyring_get_keys 00:28:59.764 21:28:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.764 21:28:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:00.022 21:28:11 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:00.022 21:28:11 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:00.022 Running I/O for 1 seconds... 00:29:00.955 00:29:00.955 Latency(us) 00:29:00.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.955 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:00.955 nvme0n1 : 1.01 8469.27 33.08 0.00 0.00 15036.34 5600.35 24665.37 00:29:00.955 =================================================================================================================== 00:29:00.955 Total : 8469.27 33.08 0.00 0.00 15036.34 5600.35 24665.37 00:29:00.955 0 00:29:00.955 21:28:12 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:00.955 21:28:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:01.213 21:28:12 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:01.213 21:28:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:01.213 21:28:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:01.213 21:28:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:01.213 21:28:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:01.213 21:28:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:01.472 21:28:12 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:01.472 21:28:12 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:01.472 21:28:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:01.472 21:28:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:01.472 21:28:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:01.472 21:28:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:01.472 21:28:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:01.731 21:28:13 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:01.731 21:28:13 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:01.731 21:28:13 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:01.731 21:28:13 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:01.731 21:28:13 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:01.731 21:28:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:01.731 21:28:13 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:01.731 21:28:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
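Stripped of the helpers, the happy path above is four JSON-RPC calls against the socket bdevperf was started on (-z -r /var/tmp/bperf.sock) plus one bdevperf.py trigger: register the key files, attach a controller that names key0 as its TLS PSK, run the preconfigured workload, detach. Paths shortened from the trace:

    rpc="./scripts/rpc.py -s /var/tmp/bperf.sock"

    $rpc keyring_file_add_key key0 /tmp/tmp.E2gFUTNPBX
    $rpc keyring_file_add_key key1 /tmp/tmp.whL8SJ42gW

    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests    # 1 s of 4 KiB randrw, as configured at startup
    $rpc bdev_nvme_detach_controller nvme0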
00:29:01.731 21:28:13 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:01.731 21:28:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:01.990 [2024-07-14 21:28:13.450803] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:01.990 [2024-07-14 21:28:13.451598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030000 (107): Transport endpoint is not connected 00:29:01.990 [2024-07-14 21:28:13.452573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030000 (9): Bad file descriptor 00:29:01.990 [2024-07-14 21:28:13.453568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:01.990 [2024-07-14 21:28:13.453599] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:01.990 [2024-07-14 21:28:13.453613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:01.990 request: 00:29:01.990 { 00:29:01.990 "name": "nvme0", 00:29:01.990 "trtype": "tcp", 00:29:01.990 "traddr": "127.0.0.1", 00:29:01.990 "adrfam": "ipv4", 00:29:01.990 "trsvcid": "4420", 00:29:01.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:01.990 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:01.990 "prchk_reftag": false, 00:29:01.990 "prchk_guard": false, 00:29:01.990 "hdgst": false, 00:29:01.990 "ddgst": false, 00:29:01.990 "psk": "key1", 00:29:01.990 "method": "bdev_nvme_attach_controller", 00:29:01.990 "req_id": 1 00:29:01.990 } 00:29:01.990 Got JSON-RPC error response 00:29:01.990 response: 00:29:01.990 { 00:29:01.990 "code": -5, 00:29:01.990 "message": "Input/output error" 00:29:01.990 } 00:29:01.990 21:28:13 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:01.990 21:28:13 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:01.990 21:28:13 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:01.990 21:28:13 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:01.990 21:28:13 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:01.990 21:28:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:01.990 21:28:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:01.990 21:28:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:01.990 21:28:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:01.990 21:28:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:02.249 21:28:13 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:02.249 21:28:13 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:02.249 21:28:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:02.249 21:28:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:02.249 21:28:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:02.249 21:28:13 keyring_file -- keyring/common.sh@10 -- # jq 
'.[] | select(.name == "key1")' 00:29:02.249 21:28:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:02.508 21:28:13 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:02.508 21:28:13 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:02.508 21:28:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:02.767 21:28:14 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:02.767 21:28:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:03.026 21:28:14 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:03.026 21:28:14 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:03.026 21:28:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.284 21:28:14 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:03.284 21:28:14 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.E2gFUTNPBX 00:29:03.284 21:28:14 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.E2gFUTNPBX 00:29:03.284 21:28:14 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:03.284 21:28:14 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.E2gFUTNPBX 00:29:03.284 21:28:14 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:03.284 21:28:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:03.284 21:28:14 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:03.284 21:28:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:03.284 21:28:14 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.E2gFUTNPBX 00:29:03.284 21:28:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.E2gFUTNPBX 00:29:03.543 [2024-07-14 21:28:14.909713] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.E2gFUTNPBX': 0100660 00:29:03.543 [2024-07-14 21:28:14.909871] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:03.543 request: 00:29:03.543 { 00:29:03.543 "name": "key0", 00:29:03.543 "path": "/tmp/tmp.E2gFUTNPBX", 00:29:03.543 "method": "keyring_file_add_key", 00:29:03.543 "req_id": 1 00:29:03.543 } 00:29:03.543 Got JSON-RPC error response 00:29:03.543 response: 00:29:03.543 { 00:29:03.543 "code": -1, 00:29:03.543 "message": "Operation not permitted" 00:29:03.543 } 00:29:03.543 21:28:14 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:03.543 21:28:14 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:03.543 21:28:14 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:03.543 21:28:14 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:03.543 21:28:14 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.E2gFUTNPBX 00:29:03.543 21:28:14 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.E2gFUTNPBX 00:29:03.543 21:28:14 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.E2gFUTNPBX 00:29:03.801 21:28:15 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.E2gFUTNPBX 00:29:03.801 21:28:15 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:03.801 21:28:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:03.801 21:28:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:03.801 21:28:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:03.801 21:28:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.801 21:28:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:04.060 21:28:15 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:04.060 21:28:15 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:04.060 21:28:15 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:04.060 21:28:15 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:04.060 21:28:15 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:04.060 21:28:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:04.060 21:28:15 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:04.060 21:28:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:04.060 21:28:15 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:04.060 21:28:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:04.321 [2024-07-14 21:28:15.738190] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.E2gFUTNPBX': No such file or directory 00:29:04.321 [2024-07-14 21:28:15.738256] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:04.321 [2024-07-14 21:28:15.738306] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:04.321 [2024-07-14 21:28:15.738319] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:04.321 [2024-07-14 21:28:15.738333] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:04.321 request: 00:29:04.321 { 00:29:04.321 "name": "nvme0", 00:29:04.321 "trtype": "tcp", 00:29:04.321 "traddr": "127.0.0.1", 00:29:04.321 "adrfam": "ipv4", 00:29:04.321 "trsvcid": "4420", 00:29:04.321 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:04.321 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:04.321 "prchk_reftag": false, 00:29:04.321 "prchk_guard": false, 00:29:04.321 "hdgst": false, 00:29:04.321 "ddgst": false, 00:29:04.321 "psk": "key0", 00:29:04.321 "method": "bdev_nvme_attach_controller", 00:29:04.321 "req_id": 1 00:29:04.321 } 00:29:04.321 
Got JSON-RPC error response 00:29:04.321 response: 00:29:04.321 { 00:29:04.321 "code": -19, 00:29:04.321 "message": "No such device" 00:29:04.321 } 00:29:04.321 21:28:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:04.321 21:28:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:04.321 21:28:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:04.321 21:28:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:04.321 21:28:15 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:04.322 21:28:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:04.608 21:28:16 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:04.608 21:28:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:04.608 21:28:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:04.608 21:28:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:04.608 21:28:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:04.608 21:28:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:04.608 21:28:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.S1Nloz6OOw 00:29:04.608 21:28:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:04.608 21:28:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:04.608 21:28:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:04.608 21:28:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:04.608 21:28:16 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:04.608 21:28:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:04.608 21:28:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:04.608 21:28:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.S1Nloz6OOw 00:29:04.608 21:28:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.S1Nloz6OOw 00:29:04.608 21:28:16 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.S1Nloz6OOw 00:29:04.608 21:28:16 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S1Nloz6OOw 00:29:04.608 21:28:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S1Nloz6OOw 00:29:04.867 21:28:16 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:04.867 21:28:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:05.126 nvme0n1 00:29:05.126 21:28:16 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:29:05.126 21:28:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:05.126 21:28:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:05.126 21:28:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:05.126 21:28:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:29:05.126 21:28:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:05.693 21:28:16 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:05.693 21:28:16 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:05.693 21:28:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:05.693 21:28:17 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:05.693 21:28:17 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:05.693 21:28:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:05.693 21:28:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:05.693 21:28:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:05.959 21:28:17 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:05.959 21:28:17 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:05.959 21:28:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:05.959 21:28:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:05.959 21:28:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:05.959 21:28:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:05.959 21:28:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.216 21:28:17 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:06.216 21:28:17 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:06.216 21:28:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:06.474 21:28:17 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:06.474 21:28:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.474 21:28:17 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:06.732 21:28:18 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:06.732 21:28:18 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S1Nloz6OOw 00:29:06.732 21:28:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S1Nloz6OOw 00:29:06.991 21:28:18 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.whL8SJ42gW 00:29:06.991 21:28:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.whL8SJ42gW 00:29:07.251 21:28:18 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:07.251 21:28:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:07.818 nvme0n1 00:29:07.818 21:28:19 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:07.818 21:28:19 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:08.078 21:28:19 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:08.078 "subsystems": [ 00:29:08.078 { 00:29:08.078 "subsystem": "keyring", 00:29:08.078 "config": [ 00:29:08.078 { 00:29:08.078 "method": "keyring_file_add_key", 00:29:08.078 "params": { 00:29:08.078 "name": "key0", 00:29:08.078 "path": "/tmp/tmp.S1Nloz6OOw" 00:29:08.078 } 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "method": "keyring_file_add_key", 00:29:08.078 "params": { 00:29:08.078 "name": "key1", 00:29:08.078 "path": "/tmp/tmp.whL8SJ42gW" 00:29:08.078 } 00:29:08.078 } 00:29:08.078 ] 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "subsystem": "iobuf", 00:29:08.078 "config": [ 00:29:08.078 { 00:29:08.078 "method": "iobuf_set_options", 00:29:08.078 "params": { 00:29:08.078 "small_pool_count": 8192, 00:29:08.078 "large_pool_count": 1024, 00:29:08.078 "small_bufsize": 8192, 00:29:08.078 "large_bufsize": 135168 00:29:08.078 } 00:29:08.078 } 00:29:08.078 ] 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "subsystem": "sock", 00:29:08.078 "config": [ 00:29:08.078 { 00:29:08.078 "method": "sock_set_default_impl", 00:29:08.078 "params": { 00:29:08.078 "impl_name": "uring" 00:29:08.078 } 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "method": "sock_impl_set_options", 00:29:08.078 "params": { 00:29:08.078 "impl_name": "ssl", 00:29:08.078 "recv_buf_size": 4096, 00:29:08.078 "send_buf_size": 4096, 00:29:08.078 "enable_recv_pipe": true, 00:29:08.078 "enable_quickack": false, 00:29:08.078 "enable_placement_id": 0, 00:29:08.078 "enable_zerocopy_send_server": true, 00:29:08.078 "enable_zerocopy_send_client": false, 00:29:08.078 "zerocopy_threshold": 0, 00:29:08.078 "tls_version": 0, 00:29:08.078 "enable_ktls": false 00:29:08.078 } 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "method": "sock_impl_set_options", 00:29:08.078 "params": { 00:29:08.078 "impl_name": "posix", 00:29:08.078 "recv_buf_size": 2097152, 00:29:08.078 "send_buf_size": 2097152, 00:29:08.078 "enable_recv_pipe": true, 00:29:08.078 "enable_quickack": false, 00:29:08.078 "enable_placement_id": 0, 00:29:08.078 "enable_zerocopy_send_server": true, 00:29:08.078 "enable_zerocopy_send_client": false, 00:29:08.078 "zerocopy_threshold": 0, 00:29:08.078 "tls_version": 0, 00:29:08.078 "enable_ktls": false 00:29:08.078 } 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "method": "sock_impl_set_options", 00:29:08.078 "params": { 00:29:08.078 "impl_name": "uring", 00:29:08.078 "recv_buf_size": 2097152, 00:29:08.078 "send_buf_size": 2097152, 00:29:08.078 "enable_recv_pipe": true, 00:29:08.078 "enable_quickack": false, 00:29:08.078 "enable_placement_id": 0, 00:29:08.078 "enable_zerocopy_send_server": false, 00:29:08.078 "enable_zerocopy_send_client": false, 00:29:08.078 "zerocopy_threshold": 0, 00:29:08.078 "tls_version": 0, 00:29:08.078 "enable_ktls": false 00:29:08.078 } 00:29:08.078 } 00:29:08.078 ] 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "subsystem": "vmd", 00:29:08.078 "config": [] 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "subsystem": "accel", 00:29:08.078 "config": [ 00:29:08.078 { 00:29:08.078 "method": "accel_set_options", 00:29:08.078 "params": { 00:29:08.078 "small_cache_size": 128, 00:29:08.078 "large_cache_size": 16, 00:29:08.078 "task_count": 2048, 00:29:08.078 "sequence_count": 2048, 00:29:08.078 "buf_count": 2048 00:29:08.078 } 00:29:08.078 } 00:29:08.078 ] 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "subsystem": "bdev", 00:29:08.078 "config": [ 00:29:08.078 { 
00:29:08.078 "method": "bdev_set_options", 00:29:08.078 "params": { 00:29:08.078 "bdev_io_pool_size": 65535, 00:29:08.078 "bdev_io_cache_size": 256, 00:29:08.078 "bdev_auto_examine": true, 00:29:08.078 "iobuf_small_cache_size": 128, 00:29:08.078 "iobuf_large_cache_size": 16 00:29:08.078 } 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "method": "bdev_raid_set_options", 00:29:08.078 "params": { 00:29:08.078 "process_window_size_kb": 1024 00:29:08.078 } 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "method": "bdev_iscsi_set_options", 00:29:08.078 "params": { 00:29:08.078 "timeout_sec": 30 00:29:08.078 } 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "method": "bdev_nvme_set_options", 00:29:08.078 "params": { 00:29:08.078 "action_on_timeout": "none", 00:29:08.078 "timeout_us": 0, 00:29:08.078 "timeout_admin_us": 0, 00:29:08.078 "keep_alive_timeout_ms": 10000, 00:29:08.078 "arbitration_burst": 0, 00:29:08.078 "low_priority_weight": 0, 00:29:08.078 "medium_priority_weight": 0, 00:29:08.078 "high_priority_weight": 0, 00:29:08.078 "nvme_adminq_poll_period_us": 10000, 00:29:08.078 "nvme_ioq_poll_period_us": 0, 00:29:08.078 "io_queue_requests": 512, 00:29:08.078 "delay_cmd_submit": true, 00:29:08.078 "transport_retry_count": 4, 00:29:08.078 "bdev_retry_count": 3, 00:29:08.078 "transport_ack_timeout": 0, 00:29:08.078 "ctrlr_loss_timeout_sec": 0, 00:29:08.078 "reconnect_delay_sec": 0, 00:29:08.078 "fast_io_fail_timeout_sec": 0, 00:29:08.078 "disable_auto_failback": false, 00:29:08.078 "generate_uuids": false, 00:29:08.078 "transport_tos": 0, 00:29:08.078 "nvme_error_stat": false, 00:29:08.078 "rdma_srq_size": 0, 00:29:08.078 "io_path_stat": false, 00:29:08.078 "allow_accel_sequence": false, 00:29:08.078 "rdma_max_cq_size": 0, 00:29:08.078 "rdma_cm_event_timeout_ms": 0, 00:29:08.078 "dhchap_digests": [ 00:29:08.078 "sha256", 00:29:08.078 "sha384", 00:29:08.078 "sha512" 00:29:08.078 ], 00:29:08.078 "dhchap_dhgroups": [ 00:29:08.078 "null", 00:29:08.078 "ffdhe2048", 00:29:08.078 "ffdhe3072", 00:29:08.078 "ffdhe4096", 00:29:08.078 "ffdhe6144", 00:29:08.078 "ffdhe8192" 00:29:08.078 ] 00:29:08.078 } 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "method": "bdev_nvme_attach_controller", 00:29:08.078 "params": { 00:29:08.078 "name": "nvme0", 00:29:08.078 "trtype": "TCP", 00:29:08.078 "adrfam": "IPv4", 00:29:08.078 "traddr": "127.0.0.1", 00:29:08.078 "trsvcid": "4420", 00:29:08.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.078 "prchk_reftag": false, 00:29:08.078 "prchk_guard": false, 00:29:08.078 "ctrlr_loss_timeout_sec": 0, 00:29:08.078 "reconnect_delay_sec": 0, 00:29:08.078 "fast_io_fail_timeout_sec": 0, 00:29:08.078 "psk": "key0", 00:29:08.078 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:08.078 "hdgst": false, 00:29:08.078 "ddgst": false 00:29:08.078 } 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "method": "bdev_nvme_set_hotplug", 00:29:08.078 "params": { 00:29:08.078 "period_us": 100000, 00:29:08.078 "enable": false 00:29:08.078 } 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "method": "bdev_wait_for_examine" 00:29:08.078 } 00:29:08.078 ] 00:29:08.078 }, 00:29:08.078 { 00:29:08.078 "subsystem": "nbd", 00:29:08.078 "config": [] 00:29:08.078 } 00:29:08.079 ] 00:29:08.079 }' 00:29:08.079 21:28:19 keyring_file -- keyring/file.sh@114 -- # killprocess 91532 00:29:08.079 21:28:19 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 91532 ']' 00:29:08.079 21:28:19 keyring_file -- common/autotest_common.sh@952 -- # kill -0 91532 00:29:08.079 21:28:19 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:29:08.079 21:28:19 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:08.079 21:28:19 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91532 00:29:08.079 killing process with pid 91532 00:29:08.079 Received shutdown signal, test time was about 1.000000 seconds 00:29:08.079 00:29:08.079 Latency(us) 00:29:08.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.079 =================================================================================================================== 00:29:08.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:08.079 21:28:19 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:08.079 21:28:19 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:08.079 21:28:19 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91532' 00:29:08.079 21:28:19 keyring_file -- common/autotest_common.sh@967 -- # kill 91532 00:29:08.079 21:28:19 keyring_file -- common/autotest_common.sh@972 -- # wait 91532 00:29:09.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:09.014 21:28:20 keyring_file -- keyring/file.sh@117 -- # bperfpid=91787 00:29:09.014 21:28:20 keyring_file -- keyring/file.sh@119 -- # waitforlisten 91787 /var/tmp/bperf.sock 00:29:09.014 21:28:20 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 91787 ']' 00:29:09.014 21:28:20 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:09.014 21:28:20 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:09.014 21:28:20 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:09.015 21:28:20 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
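The bdevperf instance launched above is started with the configuration captured earlier by save_config and echoed next in the trace; the -c /dev/fd/63 argument is consistent with the test script handing that JSON string over through bash process substitution. Roughly, with $config standing for the JSON returned by save_config and the remaining flags copied from the trace:

    # Re-launch bdevperf against the config saved from the previous instance.
    # $config is an assumption here: it holds the JSON printed by
    #   rpc.py -s /var/tmp/bperf.sock save_config
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z \
        -c <(echo "$config")    # appears in the trace as -c /dev/fd/63

Because the saved config already contains the two keyring_file_add_key calls and a bdev_nvme_attach_controller entry with "psk": "key0", the new bdevperf process recreates both keys and the TLS-protected controller at startup, which is what the subsequent key-count and controller-name checks verify.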
00:29:09.015 21:28:20 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:09.015 "subsystems": [ 00:29:09.015 { 00:29:09.015 "subsystem": "keyring", 00:29:09.015 "config": [ 00:29:09.015 { 00:29:09.015 "method": "keyring_file_add_key", 00:29:09.015 "params": { 00:29:09.015 "name": "key0", 00:29:09.015 "path": "/tmp/tmp.S1Nloz6OOw" 00:29:09.015 } 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "method": "keyring_file_add_key", 00:29:09.015 "params": { 00:29:09.015 "name": "key1", 00:29:09.015 "path": "/tmp/tmp.whL8SJ42gW" 00:29:09.015 } 00:29:09.015 } 00:29:09.015 ] 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "subsystem": "iobuf", 00:29:09.015 "config": [ 00:29:09.015 { 00:29:09.015 "method": "iobuf_set_options", 00:29:09.015 "params": { 00:29:09.015 "small_pool_count": 8192, 00:29:09.015 "large_pool_count": 1024, 00:29:09.015 "small_bufsize": 8192, 00:29:09.015 "large_bufsize": 135168 00:29:09.015 } 00:29:09.015 } 00:29:09.015 ] 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "subsystem": "sock", 00:29:09.015 "config": [ 00:29:09.015 { 00:29:09.015 "method": "sock_set_default_impl", 00:29:09.015 "params": { 00:29:09.015 "impl_name": "uring" 00:29:09.015 } 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "method": "sock_impl_set_options", 00:29:09.015 "params": { 00:29:09.015 "impl_name": "ssl", 00:29:09.015 "recv_buf_size": 4096, 00:29:09.015 "send_buf_size": 4096, 00:29:09.015 "enable_recv_pipe": true, 00:29:09.015 "enable_quickack": false, 00:29:09.015 "enable_placement_id": 0, 00:29:09.015 "enable_zerocopy_send_server": true, 00:29:09.015 "enable_zerocopy_send_client": false, 00:29:09.015 "zerocopy_threshold": 0, 00:29:09.015 "tls_version": 0, 00:29:09.015 "enable_ktls": false 00:29:09.015 } 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "method": "sock_impl_set_options", 00:29:09.015 "params": { 00:29:09.015 "impl_name": "posix", 00:29:09.015 "recv_buf_size": 2097152, 00:29:09.015 "send_buf_size": 2097152, 00:29:09.015 "enable_recv_pipe": true, 00:29:09.015 "enable_quickack": false, 00:29:09.015 "enable_placement_id": 0, 00:29:09.015 "enable_zerocopy_send_server": true, 00:29:09.015 "enable_zerocopy_send_client": false, 00:29:09.015 "zerocopy_threshold": 0, 00:29:09.015 "tls_version": 0, 00:29:09.015 "enable_ktls": false 00:29:09.015 } 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "method": "sock_impl_set_options", 00:29:09.015 "params": { 00:29:09.015 "impl_name": "uring", 00:29:09.015 "recv_buf_size": 2097152, 00:29:09.015 "send_buf_size": 2097152, 00:29:09.015 "enable_recv_pipe": true, 00:29:09.015 "enable_quickack": false, 00:29:09.015 "enable_placement_id": 0, 00:29:09.015 "enable_zerocopy_send_server": false, 00:29:09.015 "enable_zerocopy_send_client": false, 00:29:09.015 "zerocopy_threshold": 0, 00:29:09.015 "tls_version": 0, 00:29:09.015 "enable_ktls": false 00:29:09.015 } 00:29:09.015 } 00:29:09.015 ] 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "subsystem": "vmd", 00:29:09.015 "config": [] 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "subsystem": "accel", 00:29:09.015 "config": [ 00:29:09.015 { 00:29:09.015 "method": "accel_set_options", 00:29:09.015 "params": { 00:29:09.015 "small_cache_size": 128, 00:29:09.015 "large_cache_size": 16, 00:29:09.015 "task_count": 2048, 00:29:09.015 "sequence_count": 2048, 00:29:09.015 "buf_count": 2048 00:29:09.015 } 00:29:09.015 } 00:29:09.015 ] 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "subsystem": "bdev", 00:29:09.015 "config": [ 00:29:09.015 { 00:29:09.015 "method": "bdev_set_options", 00:29:09.015 "params": { 00:29:09.015 "bdev_io_pool_size": 65535, 
00:29:09.015 "bdev_io_cache_size": 256, 00:29:09.015 "bdev_auto_examine": true, 00:29:09.015 "iobuf_small_cache_size": 128, 00:29:09.015 "iobuf_large_cache_size": 16 00:29:09.015 } 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "method": "bdev_raid_set_options", 00:29:09.015 "params": { 00:29:09.015 "process_window_size_kb": 1024 00:29:09.015 } 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "method": "bdev_iscsi_set_options", 00:29:09.015 "params": { 00:29:09.015 "timeout_sec": 30 00:29:09.015 } 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "method": "bdev_nvme_set_options", 00:29:09.015 "params": { 00:29:09.015 "action_on_timeout": "none", 00:29:09.015 "timeout_us": 0, 00:29:09.015 "timeout_admin_us": 0, 00:29:09.015 "keep_alive_timeout_ms": 10000, 00:29:09.015 "arbitration_burst": 0, 00:29:09.015 "low_priority_weight": 0, 00:29:09.015 "medium_priority_weight": 0, 00:29:09.015 "high_priority_weight": 0, 00:29:09.015 "nvme_adminq_poll_period_us": 10000, 00:29:09.015 "nvme_ioq_poll_period_us": 0, 00:29:09.015 "io_queue_requests": 512, 00:29:09.015 "delay_cmd_submit": true, 00:29:09.015 "transport_retry_count": 4, 00:29:09.015 "bdev_retry_count": 3, 00:29:09.015 "transport_ack_timeout": 0, 00:29:09.015 "ctrlr_loss_timeout_sec": 0, 00:29:09.015 "reconnect_delay_sec": 0, 00:29:09.015 "fast_io_fail_timeout_sec": 0, 00:29:09.015 "disable_auto_failback": false, 00:29:09.015 "generate_uuids": false, 00:29:09.015 "transport_tos": 0, 00:29:09.015 "nvme_error_stat": false, 00:29:09.015 "rdma_srq_size": 0, 00:29:09.015 "io_path_stat": false, 00:29:09.015 "allow_accel_sequence": false, 00:29:09.015 "rdma_max_cq_size": 0, 00:29:09.015 "rdma_cm_event_timeout_ms": 0, 00:29:09.015 "dhchap_digests": [ 00:29:09.015 "sha256", 00:29:09.015 "sha384", 00:29:09.015 "sha512" 00:29:09.015 ], 00:29:09.015 "dhchap_dhgroups": [ 00:29:09.015 "null", 00:29:09.015 "ffdhe2048", 00:29:09.015 "ffdhe3072", 00:29:09.015 "ffdhe4096", 00:29:09.015 "ffdhe6144", 00:29:09.015 "ffdhe8192" 00:29:09.015 ] 00:29:09.015 } 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "method": "bdev_nvme_attach_controller", 00:29:09.015 "params": { 00:29:09.015 "name": "nvme0", 00:29:09.015 "trtype": "TCP", 00:29:09.015 "adrfam": "IPv4", 00:29:09.015 "traddr": "127.0.0.1", 00:29:09.015 "trsvcid": "4420", 00:29:09.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:09.015 "prchk_reftag": false, 00:29:09.015 "prchk_guard": false, 00:29:09.015 "ctrlr_loss_timeout_sec": 0, 00:29:09.015 "reconnect_delay_sec": 0, 00:29:09.015 "fast_io_fail_timeout_sec": 0, 00:29:09.015 "psk": "key0", 00:29:09.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:09.015 "hdgst": false, 00:29:09.015 "ddgst": false 00:29:09.015 } 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "method": "bdev_nvme_set_hotplug", 00:29:09.015 "params": { 00:29:09.015 "period_us": 100000, 00:29:09.015 "enable": false 00:29:09.015 } 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "method": "bdev_wait_for_examine" 00:29:09.015 } 00:29:09.015 ] 00:29:09.015 }, 00:29:09.015 { 00:29:09.015 "subsystem": "nbd", 00:29:09.015 "config": [] 00:29:09.015 } 00:29:09.015 ] 00:29:09.015 }' 00:29:09.015 21:28:20 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:09.015 21:28:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:09.015 [2024-07-14 21:28:20.523353] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:29:09.015 [2024-07-14 21:28:20.523733] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91787 ] 00:29:09.273 [2024-07-14 21:28:20.697469] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.531 [2024-07-14 21:28:20.885961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.790 [2024-07-14 21:28:21.151748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:09.790 [2024-07-14 21:28:21.268327] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:10.049 21:28:21 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:10.049 21:28:21 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:10.049 21:28:21 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:10.049 21:28:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.049 21:28:21 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:10.308 21:28:21 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:10.308 21:28:21 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:10.308 21:28:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:10.308 21:28:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.308 21:28:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.308 21:28:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:10.308 21:28:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.566 21:28:22 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:10.566 21:28:22 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:10.566 21:28:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:10.566 21:28:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.566 21:28:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:10.566 21:28:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.566 21:28:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.823 21:28:22 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:10.823 21:28:22 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:10.824 21:28:22 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:10.824 21:28:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:11.082 21:28:22 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:11.082 21:28:22 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:11.082 21:28:22 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.S1Nloz6OOw /tmp/tmp.whL8SJ42gW 00:29:11.082 21:28:22 keyring_file -- keyring/file.sh@20 -- # killprocess 91787 00:29:11.082 21:28:22 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 91787 ']' 00:29:11.082 21:28:22 keyring_file -- common/autotest_common.sh@952 -- # kill -0 91787 00:29:11.082 21:28:22 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:29:11.082 21:28:22 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:11.082 21:28:22 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91787 00:29:11.082 killing process with pid 91787 00:29:11.082 Received shutdown signal, test time was about 1.000000 seconds 00:29:11.082 00:29:11.082 Latency(us) 00:29:11.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.082 =================================================================================================================== 00:29:11.082 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:11.082 21:28:22 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:11.082 21:28:22 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:11.082 21:28:22 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91787' 00:29:11.082 21:28:22 keyring_file -- common/autotest_common.sh@967 -- # kill 91787 00:29:11.082 21:28:22 keyring_file -- common/autotest_common.sh@972 -- # wait 91787 00:29:12.477 21:28:23 keyring_file -- keyring/file.sh@21 -- # killprocess 91515 00:29:12.477 21:28:23 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 91515 ']' 00:29:12.477 21:28:23 keyring_file -- common/autotest_common.sh@952 -- # kill -0 91515 00:29:12.477 21:28:23 keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:12.477 killing process with pid 91515 00:29:12.477 21:28:23 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:12.477 21:28:23 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91515 00:29:12.477 21:28:23 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:12.477 21:28:23 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:12.477 21:28:23 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91515' 00:29:12.477 21:28:23 keyring_file -- common/autotest_common.sh@967 -- # kill 91515 00:29:12.478 [2024-07-14 21:28:23.774587] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:12.478 21:28:23 keyring_file -- common/autotest_common.sh@972 -- # wait 91515 00:29:14.380 00:29:14.380 real 0m19.466s 00:29:14.380 user 0m44.832s 00:29:14.380 sys 0m3.157s 00:29:14.380 21:28:25 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:14.380 ************************************ 00:29:14.380 END TEST keyring_file 00:29:14.380 ************************************ 00:29:14.380 21:28:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:14.380 21:28:25 -- common/autotest_common.sh@1142 -- # return 0 00:29:14.380 21:28:25 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:29:14.380 21:28:25 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:14.380 21:28:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:14.380 21:28:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.380 21:28:25 -- common/autotest_common.sh@10 -- # set +x 00:29:14.380 ************************************ 00:29:14.380 START TEST keyring_linux 00:29:14.380 ************************************ 00:29:14.380 21:28:25 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:14.380 * Looking for test 
storage... 00:29:14.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:14.380 21:28:25 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:14.380 21:28:25 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5dc810d-291e-43ba-88f4-ab46cda07291 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=e5dc810d-291e-43ba-88f4-ab46cda07291 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:14.380 21:28:25 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.380 21:28:25 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.380 21:28:25 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.380 21:28:25 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.380 21:28:25 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.380 21:28:25 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.380 21:28:25 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:14.380 21:28:25 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:14.380 21:28:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:14.380 21:28:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:14.380 21:28:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:14.380 21:28:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:14.380 21:28:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:14.380 21:28:25 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:14.380 21:28:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:14.380 21:28:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:14.380 21:28:25 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:14.380 21:28:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:14.380 21:28:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:14.380 21:28:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:14.380 21:28:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:14.380 21:28:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:14.639 21:28:25 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:14.639 21:28:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:14.639 /tmp/:spdk-test:key0 00:29:14.639 21:28:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:14.639 21:28:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:14.639 21:28:25 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:14.639 21:28:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:14.639 21:28:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:14.639 21:28:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:14.639 21:28:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:14.639 21:28:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:14.639 21:28:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:14.639 21:28:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:14.639 21:28:25 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:14.639 21:28:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:14.639 21:28:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:14.639 21:28:26 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:14.639 21:28:26 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:14.639 /tmp/:spdk-test:key1 00:29:14.639 21:28:26 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=91933 00:29:14.639 21:28:26 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:14.639 21:28:26 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 91933 00:29:14.639 21:28:26 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 91933 ']' 00:29:14.639 21:28:26 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.639 21:28:26 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:14.639 21:28:26 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.639 21:28:26 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:14.639 21:28:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:14.639 [2024-07-14 21:28:26.133661] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:29:14.639 [2024-07-14 21:28:26.133903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91933 ] 00:29:14.899 [2024-07-14 21:28:26.298706] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.158 [2024-07-14 21:28:26.448724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.158 [2024-07-14 21:28:26.597299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:15.728 21:28:27 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:15.728 21:28:27 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:15.728 21:28:27 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:15.728 21:28:27 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.728 21:28:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:15.728 [2024-07-14 21:28:27.075829] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.728 null0 00:29:15.728 [2024-07-14 21:28:27.107727] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:15.728 [2024-07-14 21:28:27.108079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:15.728 21:28:27 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.728 21:28:27 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:15.728 19344984 00:29:15.728 21:28:27 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:15.728 339097933 00:29:15.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:15.728 21:28:27 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=91947 00:29:15.728 21:28:27 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:15.728 21:28:27 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 91947 /var/tmp/bperf.sock 00:29:15.728 21:28:27 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 91947 ']' 00:29:15.728 21:28:27 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:15.728 21:28:27 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:15.728 21:28:27 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:15.728 21:28:27 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:15.728 21:28:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:15.728 [2024-07-14 21:28:27.239196] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
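The keyring_linux variant stores the interchange-format PSKs in the kernel session keyring rather than in files, and SPDK later resolves them by name (--psk :spdk-test:key0) once keyring_linux_set_options --enable has been issued over the RPC socket. The keyctl lifecycle the test exercises, condensed into a sketch (the payload is the NVMeTLSkey-1 string prepared above, abbreviated here; serial numbers differ per run):

    # Add the interchange-format PSK to the session keyring under a well-known name;
    # keyctl prints the serial number of the new key.  Payload abbreviated.
    sn=$(keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:...:' @s)

    # Later lookups resolve the key by name; print dumps the stored payload.
    keyctl search @s user :spdk-test:key0
    keyctl print "$sn"

    # Cleanup unlinks the key again (the trace reports "1 links removed").
    keyctl unlink "$sn"

The test asserts that the serial returned by keyctl search matches the .sn reported by SPDK's keyring_get_keys, and that keyctl print returns exactly the NVMeTLSkey-1 payload that was added.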
00:29:15.728 [2024-07-14 21:28:27.239736] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91947 ] 00:29:15.986 [2024-07-14 21:28:27.413130] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.245 [2024-07-14 21:28:27.636992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.811 21:28:28 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:16.811 21:28:28 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:16.811 21:28:28 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:16.811 21:28:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:17.070 21:28:28 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:17.070 21:28:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:17.329 [2024-07-14 21:28:28.783325] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:17.587 21:28:28 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:17.587 21:28:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:17.845 [2024-07-14 21:28:29.160325] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:17.845 nvme0n1 00:29:17.845 21:28:29 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:17.845 21:28:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:17.845 21:28:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:17.845 21:28:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:17.845 21:28:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:17.845 21:28:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:18.103 21:28:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:18.103 21:28:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:18.103 21:28:29 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:18.103 21:28:29 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:18.103 21:28:29 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:18.103 21:28:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:18.103 21:28:29 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:18.361 21:28:29 keyring_linux -- keyring/linux.sh@25 -- # sn=19344984 00:29:18.361 21:28:29 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:18.361 21:28:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:18.361 
21:28:29 keyring_linux -- keyring/linux.sh@26 -- # [[ 19344984 == \1\9\3\4\4\9\8\4 ]] 00:29:18.361 21:28:29 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 19344984 00:29:18.361 21:28:29 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:18.361 21:28:29 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:18.619 Running I/O for 1 seconds... 00:29:19.556 00:29:19.556 Latency(us) 00:29:19.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.556 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:19.556 nvme0n1 : 1.01 8780.44 34.30 0.00 0.00 14466.87 6672.76 21209.83 00:29:19.556 =================================================================================================================== 00:29:19.556 Total : 8780.44 34.30 0.00 0.00 14466.87 6672.76 21209.83 00:29:19.556 0 00:29:19.556 21:28:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:19.556 21:28:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:19.815 21:28:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:19.815 21:28:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:19.815 21:28:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:19.815 21:28:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:19.815 21:28:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:19.815 21:28:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:20.383 21:28:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:20.383 [2024-07-14 21:28:31.875413] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:20.383 [2024-07-14 21:28:31.875586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002f880 (107): Transport endpoint is not connected 00:29:20.383 [2024-07-14 21:28:31.876556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002f880 (9): Bad file descriptor 00:29:20.383 [2024-07-14 21:28:31.877552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:20.383 [2024-07-14 21:28:31.877585] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:20.383 [2024-07-14 21:28:31.877601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:20.383 request: 00:29:20.383 { 00:29:20.383 "name": "nvme0", 00:29:20.383 "trtype": "tcp", 00:29:20.383 "traddr": "127.0.0.1", 00:29:20.383 "adrfam": "ipv4", 00:29:20.383 "trsvcid": "4420", 00:29:20.383 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:20.383 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:20.383 "prchk_reftag": false, 00:29:20.383 "prchk_guard": false, 00:29:20.383 "hdgst": false, 00:29:20.383 "ddgst": false, 00:29:20.383 "psk": ":spdk-test:key1", 00:29:20.383 "method": "bdev_nvme_attach_controller", 00:29:20.383 "req_id": 1 00:29:20.383 } 00:29:20.383 Got JSON-RPC error response 00:29:20.383 response: 00:29:20.383 { 00:29:20.383 "code": -5, 00:29:20.383 "message": "Input/output error" 00:29:20.383 } 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@33 -- # sn=19344984 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 19344984 00:29:20.383 1 links removed 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@33 -- # sn=339097933 00:29:20.383 21:28:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 339097933 00:29:20.383 1 links removed 00:29:20.383 21:28:31 
keyring_linux -- keyring/linux.sh@41 -- # killprocess 91947 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 91947 ']' 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 91947 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:20.383 21:28:31 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91947 00:29:20.643 killing process with pid 91947 00:29:20.643 21:28:31 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:20.643 21:28:31 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:20.643 21:28:31 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91947' 00:29:20.643 21:28:31 keyring_linux -- common/autotest_common.sh@967 -- # kill 91947 00:29:20.643 Received shutdown signal, test time was about 1.000000 seconds 00:29:20.643 00:29:20.643 Latency(us) 00:29:20.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.643 =================================================================================================================== 00:29:20.643 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:20.643 21:28:31 keyring_linux -- common/autotest_common.sh@972 -- # wait 91947 00:29:21.604 21:28:32 keyring_linux -- keyring/linux.sh@42 -- # killprocess 91933 00:29:21.604 21:28:32 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 91933 ']' 00:29:21.604 21:28:32 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 91933 00:29:21.604 21:28:32 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:21.604 21:28:32 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:21.604 21:28:32 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91933 00:29:21.604 killing process with pid 91933 00:29:21.604 21:28:32 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:21.604 21:28:32 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:21.604 21:28:32 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91933' 00:29:21.604 21:28:32 keyring_linux -- common/autotest_common.sh@967 -- # kill 91933 00:29:21.604 21:28:32 keyring_linux -- common/autotest_common.sh@972 -- # wait 91933 00:29:24.135 00:29:24.135 real 0m9.251s 00:29:24.135 user 0m16.518s 00:29:24.135 sys 0m1.588s 00:29:24.135 21:28:35 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:24.135 21:28:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:24.135 ************************************ 00:29:24.135 END TEST keyring_linux 00:29:24.135 ************************************ 00:29:24.135 21:28:35 -- common/autotest_common.sh@1142 -- # return 0 00:29:24.135 21:28:35 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:29:24.135 21:28:35 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:24.135 21:28:35 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:29:24.135 21:28:35 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:29:24.135 21:28:35 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:29:24.135 21:28:35 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:29:24.135 21:28:35 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:24.135 21:28:35 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:24.135 21:28:35 -- spdk/autotest.sh@347 -- # '[' 0 
-eq 1 ']' 00:29:24.135 21:28:35 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:29:24.135 21:28:35 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:29:24.135 21:28:35 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:24.135 21:28:35 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:24.135 21:28:35 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:24.135 21:28:35 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:24.135 21:28:35 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:29:24.135 21:28:35 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:29:24.135 21:28:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:24.135 21:28:35 -- common/autotest_common.sh@10 -- # set +x 00:29:24.135 21:28:35 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:29:24.135 21:28:35 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:24.135 21:28:35 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:24.135 21:28:35 -- common/autotest_common.sh@10 -- # set +x 00:29:25.509 INFO: APP EXITING 00:29:25.509 INFO: killing all VMs 00:29:25.509 INFO: killing vhost app 00:29:25.509 INFO: EXIT DONE 00:29:25.767 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:26.026 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:26.026 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:26.594 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:26.594 Cleaning 00:29:26.594 Removing: /var/run/dpdk/spdk0/config 00:29:26.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:26.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:26.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:26.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:26.594 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:26.594 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:26.594 Removing: /var/run/dpdk/spdk1/config 00:29:26.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:26.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:26.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:26.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:26.594 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:26.594 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:26.594 Removing: /var/run/dpdk/spdk2/config 00:29:26.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:26.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:26.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:26.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:26.594 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:26.594 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:26.594 Removing: /var/run/dpdk/spdk3/config 00:29:26.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:26.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:26.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:26.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:26.594 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:26.594 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:26.594 Removing: /var/run/dpdk/spdk4/config 00:29:26.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:26.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:26.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:26.852 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:26.852 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:26.852 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:26.852 Removing: /dev/shm/nvmf_trace.0 00:29:26.852 Removing: /dev/shm/spdk_tgt_trace.pid59495 00:29:26.852 Removing: /var/run/dpdk/spdk0 00:29:26.852 Removing: /var/run/dpdk/spdk1 00:29:26.852 Removing: /var/run/dpdk/spdk2 00:29:26.852 Removing: /var/run/dpdk/spdk3 00:29:26.852 Removing: /var/run/dpdk/spdk4 00:29:26.852 Removing: /var/run/dpdk/spdk_pid59290 00:29:26.852 Removing: /var/run/dpdk/spdk_pid59495 00:29:26.852 Removing: /var/run/dpdk/spdk_pid59705 00:29:26.853 Removing: /var/run/dpdk/spdk_pid59809 00:29:26.853 Removing: /var/run/dpdk/spdk_pid59854 00:29:26.853 Removing: /var/run/dpdk/spdk_pid59982 00:29:26.853 Removing: /var/run/dpdk/spdk_pid60000 00:29:26.853 Removing: /var/run/dpdk/spdk_pid60143 00:29:26.853 Removing: /var/run/dpdk/spdk_pid60341 00:29:26.853 Removing: /var/run/dpdk/spdk_pid60499 00:29:26.853 Removing: /var/run/dpdk/spdk_pid60584 00:29:26.853 Removing: /var/run/dpdk/spdk_pid60678 00:29:26.853 Removing: /var/run/dpdk/spdk_pid60781 00:29:26.853 Removing: /var/run/dpdk/spdk_pid60870 00:29:26.853 Removing: /var/run/dpdk/spdk_pid60915 00:29:26.853 Removing: /var/run/dpdk/spdk_pid60951 00:29:26.853 Removing: /var/run/dpdk/spdk_pid61014 00:29:26.853 Removing: /var/run/dpdk/spdk_pid61120 00:29:26.853 Removing: /var/run/dpdk/spdk_pid61559 00:29:26.853 Removing: /var/run/dpdk/spdk_pid61629 00:29:26.853 Removing: /var/run/dpdk/spdk_pid61696 00:29:26.853 Removing: /var/run/dpdk/spdk_pid61713 00:29:26.853 Removing: /var/run/dpdk/spdk_pid61834 00:29:26.853 Removing: /var/run/dpdk/spdk_pid61854 00:29:26.853 Removing: /var/run/dpdk/spdk_pid61976 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62002 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62063 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62086 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62139 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62163 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62331 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62368 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62449 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62519 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62550 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62628 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62669 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62711 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62752 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62795 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62845 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62886 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62927 00:29:26.853 Removing: /var/run/dpdk/spdk_pid62974 00:29:26.853 Removing: /var/run/dpdk/spdk_pid63020 00:29:26.853 Removing: /var/run/dpdk/spdk_pid63061 00:29:26.853 Removing: /var/run/dpdk/spdk_pid63108 00:29:26.853 Removing: /var/run/dpdk/spdk_pid63149 00:29:26.853 Removing: /var/run/dpdk/spdk_pid63190 00:29:26.853 Removing: /var/run/dpdk/spdk_pid63236 00:29:26.853 Removing: /var/run/dpdk/spdk_pid63283 00:29:26.853 Removing: /var/run/dpdk/spdk_pid63324 00:29:26.853 Removing: /var/run/dpdk/spdk_pid63368 00:29:26.853 Removing: /var/run/dpdk/spdk_pid63418 00:29:26.853 Removing: /var/run/dpdk/spdk_pid63464 00:29:26.853 Removing: /var/run/dpdk/spdk_pid63506 00:29:26.853 Removing: /var/run/dpdk/spdk_pid63587 00:29:26.853 Removing: /var/run/dpdk/spdk_pid63699 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64019 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64032 
00:29:26.853 Removing: /var/run/dpdk/spdk_pid64075 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64106 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64128 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64164 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64190 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64218 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64249 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64274 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64307 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64338 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64364 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64391 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64422 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64448 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64481 00:29:26.853 Removing: /var/run/dpdk/spdk_pid64512 00:29:27.112 Removing: /var/run/dpdk/spdk_pid64532 00:29:27.112 Removing: /var/run/dpdk/spdk_pid64565 00:29:27.112 Removing: /var/run/dpdk/spdk_pid64608 00:29:27.112 Removing: /var/run/dpdk/spdk_pid64633 00:29:27.112 Removing: /var/run/dpdk/spdk_pid64675 00:29:27.112 Removing: /var/run/dpdk/spdk_pid64751 00:29:27.112 Removing: /var/run/dpdk/spdk_pid64791 00:29:27.112 Removing: /var/run/dpdk/spdk_pid64813 00:29:27.112 Removing: /var/run/dpdk/spdk_pid64860 00:29:27.112 Removing: /var/run/dpdk/spdk_pid64881 00:29:27.112 Removing: /var/run/dpdk/spdk_pid64901 00:29:27.112 Removing: /var/run/dpdk/spdk_pid64961 00:29:27.112 Removing: /var/run/dpdk/spdk_pid64981 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65027 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65054 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65081 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65103 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65124 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65146 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65174 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65194 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65236 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65280 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65296 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65342 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65364 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65383 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65436 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65465 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65503 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65523 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65542 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65562 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65587 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65601 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65626 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65646 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65727 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65823 00:29:27.112 Removing: /var/run/dpdk/spdk_pid65978 00:29:27.112 Removing: /var/run/dpdk/spdk_pid66029 00:29:27.112 Removing: /var/run/dpdk/spdk_pid66086 00:29:27.112 Removing: /var/run/dpdk/spdk_pid66113 00:29:27.112 Removing: /var/run/dpdk/spdk_pid66147 00:29:27.112 Removing: /var/run/dpdk/spdk_pid66180 00:29:27.112 Removing: /var/run/dpdk/spdk_pid66229 00:29:27.112 Removing: /var/run/dpdk/spdk_pid66251 00:29:27.112 Removing: /var/run/dpdk/spdk_pid66333 00:29:27.112 Removing: /var/run/dpdk/spdk_pid66375 00:29:27.112 Removing: /var/run/dpdk/spdk_pid66463 00:29:27.112 Removing: /var/run/dpdk/spdk_pid66557 00:29:27.112 Removing: /var/run/dpdk/spdk_pid66652 00:29:27.112 Removing: /var/run/dpdk/spdk_pid66700 00:29:27.112 Removing: 
/var/run/dpdk/spdk_pid66814 00:29:27.112 Removing: /var/run/dpdk/spdk_pid66869 00:29:27.112 Removing: /var/run/dpdk/spdk_pid66920 00:29:27.112 Removing: /var/run/dpdk/spdk_pid67162 00:29:27.112 Removing: /var/run/dpdk/spdk_pid67274 00:29:27.112 Removing: /var/run/dpdk/spdk_pid67315 00:29:27.112 Removing: /var/run/dpdk/spdk_pid67637 00:29:27.112 Removing: /var/run/dpdk/spdk_pid67676 00:29:27.112 Removing: /var/run/dpdk/spdk_pid67995 00:29:27.112 Removing: /var/run/dpdk/spdk_pid68414 00:29:27.112 Removing: /var/run/dpdk/spdk_pid68699 00:29:27.112 Removing: /var/run/dpdk/spdk_pid69542 00:29:27.112 Removing: /var/run/dpdk/spdk_pid70389 00:29:27.112 Removing: /var/run/dpdk/spdk_pid70517 00:29:27.112 Removing: /var/run/dpdk/spdk_pid70596 00:29:27.112 Removing: /var/run/dpdk/spdk_pid71884 00:29:27.112 Removing: /var/run/dpdk/spdk_pid72141 00:29:27.112 Removing: /var/run/dpdk/spdk_pid75474 00:29:27.112 Removing: /var/run/dpdk/spdk_pid75798 00:29:27.112 Removing: /var/run/dpdk/spdk_pid75907 00:29:27.112 Removing: /var/run/dpdk/spdk_pid76047 00:29:27.112 Removing: /var/run/dpdk/spdk_pid76087 00:29:27.112 Removing: /var/run/dpdk/spdk_pid76121 00:29:27.112 Removing: /var/run/dpdk/spdk_pid76155 00:29:27.112 Removing: /var/run/dpdk/spdk_pid76266 00:29:27.112 Removing: /var/run/dpdk/spdk_pid76402 00:29:27.112 Removing: /var/run/dpdk/spdk_pid76578 00:29:27.112 Removing: /var/run/dpdk/spdk_pid76682 00:29:27.112 Removing: /var/run/dpdk/spdk_pid76886 00:29:27.112 Removing: /var/run/dpdk/spdk_pid76988 00:29:27.112 Removing: /var/run/dpdk/spdk_pid77095 00:29:27.112 Removing: /var/run/dpdk/spdk_pid77431 00:29:27.112 Removing: /var/run/dpdk/spdk_pid77788 00:29:27.112 Removing: /var/run/dpdk/spdk_pid77802 00:29:27.371 Removing: /var/run/dpdk/spdk_pid80016 00:29:27.371 Removing: /var/run/dpdk/spdk_pid80019 00:29:27.371 Removing: /var/run/dpdk/spdk_pid80311 00:29:27.371 Removing: /var/run/dpdk/spdk_pid80336 00:29:27.371 Removing: /var/run/dpdk/spdk_pid80352 00:29:27.371 Removing: /var/run/dpdk/spdk_pid80384 00:29:27.371 Removing: /var/run/dpdk/spdk_pid80390 00:29:27.371 Removing: /var/run/dpdk/spdk_pid80482 00:29:27.371 Removing: /var/run/dpdk/spdk_pid80485 00:29:27.371 Removing: /var/run/dpdk/spdk_pid80593 00:29:27.371 Removing: /var/run/dpdk/spdk_pid80603 00:29:27.371 Removing: /var/run/dpdk/spdk_pid80707 00:29:27.371 Removing: /var/run/dpdk/spdk_pid80714 00:29:27.371 Removing: /var/run/dpdk/spdk_pid81111 00:29:27.371 Removing: /var/run/dpdk/spdk_pid81156 00:29:27.371 Removing: /var/run/dpdk/spdk_pid81259 00:29:27.371 Removing: /var/run/dpdk/spdk_pid81330 00:29:27.371 Removing: /var/run/dpdk/spdk_pid81643 00:29:27.371 Removing: /var/run/dpdk/spdk_pid81851 00:29:27.371 Removing: /var/run/dpdk/spdk_pid82247 00:29:27.371 Removing: /var/run/dpdk/spdk_pid82760 00:29:27.371 Removing: /var/run/dpdk/spdk_pid83592 00:29:27.371 Removing: /var/run/dpdk/spdk_pid84208 00:29:27.371 Removing: /var/run/dpdk/spdk_pid84212 00:29:27.371 Removing: /var/run/dpdk/spdk_pid86155 00:29:27.371 Removing: /var/run/dpdk/spdk_pid86223 00:29:27.371 Removing: /var/run/dpdk/spdk_pid86291 00:29:27.371 Removing: /var/run/dpdk/spdk_pid86358 00:29:27.371 Removing: /var/run/dpdk/spdk_pid86504 00:29:27.371 Removing: /var/run/dpdk/spdk_pid86572 00:29:27.371 Removing: /var/run/dpdk/spdk_pid86639 00:29:27.371 Removing: /var/run/dpdk/spdk_pid86706 00:29:27.371 Removing: /var/run/dpdk/spdk_pid87055 00:29:27.371 Removing: /var/run/dpdk/spdk_pid88222 00:29:27.371 Removing: /var/run/dpdk/spdk_pid88376 00:29:27.371 Removing: /var/run/dpdk/spdk_pid88626 
00:29:27.371 Removing: /var/run/dpdk/spdk_pid89184 00:29:27.371 Removing: /var/run/dpdk/spdk_pid89346 00:29:27.371 Removing: /var/run/dpdk/spdk_pid89503 00:29:27.371 Removing: /var/run/dpdk/spdk_pid89602 00:29:27.371 Removing: /var/run/dpdk/spdk_pid89773 00:29:27.371 Removing: /var/run/dpdk/spdk_pid89887 00:29:27.371 Removing: /var/run/dpdk/spdk_pid90561 00:29:27.371 Removing: /var/run/dpdk/spdk_pid90592 00:29:27.371 Removing: /var/run/dpdk/spdk_pid90634 00:29:27.371 Removing: /var/run/dpdk/spdk_pid91021 00:29:27.371 Removing: /var/run/dpdk/spdk_pid91056 00:29:27.371 Removing: /var/run/dpdk/spdk_pid91089 00:29:27.371 Removing: /var/run/dpdk/spdk_pid91515 00:29:27.371 Removing: /var/run/dpdk/spdk_pid91532 00:29:27.371 Removing: /var/run/dpdk/spdk_pid91787 00:29:27.371 Removing: /var/run/dpdk/spdk_pid91933 00:29:27.371 Removing: /var/run/dpdk/spdk_pid91947 00:29:27.371 Clean 00:29:27.371 21:28:38 -- common/autotest_common.sh@1451 -- # return 0 00:29:27.371 21:28:38 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:29:27.371 21:28:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:27.371 21:28:38 -- common/autotest_common.sh@10 -- # set +x 00:29:27.371 21:28:38 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:29:27.371 21:28:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:27.371 21:28:38 -- common/autotest_common.sh@10 -- # set +x 00:29:27.631 21:28:38 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:27.631 21:28:38 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:27.631 21:28:38 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:27.631 21:28:38 -- spdk/autotest.sh@391 -- # hash lcov 00:29:27.631 21:28:38 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:27.631 21:28:38 -- spdk/autotest.sh@393 -- # hostname 00:29:27.631 21:28:38 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:27.631 geninfo: WARNING: invalid characters removed from testname! 
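Editor's note on the coverage step just above and the merge/filter passes that follow below: after the test run, autotest captures the gcov counters from the instrumented tree with lcov, tags them with the host name as the test label (the "geninfo: WARNING: invalid characters removed from testname!" line appears because the hostname contains characters such as '-' that lcov does not accept in a test name and silently strips), then merges the pre-test baseline with the post-test capture and removes source patterns that should not count toward SPDK coverage. A minimal sketch of that flow outside the harness is below; the directory layout mirrors the paths in the log, OUT is a placeholder for the output directory, and cov_base.info is assumed to have been produced by an initial-capture run (lcov -c -i) before the tests started.

  # sketch of the lcov post-processing seen in this log (paths illustrative)
  SPDK=/home/vagrant/spdk_repo/spdk
  OUT=$SPDK/../output
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

  # 1) capture counters gathered while the tests ran, labelled with the hostname
  lcov $LCOV_OPTS -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"

  # 2) merge the pre-test baseline with the test capture
  lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

  # 3) strip sources that should not count toward SPDK coverage
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
  done

The resulting cov_total.info is what later tooling (genhtml or similar) would render into an HTML coverage report.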
00:29:54.203 21:29:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:56.736 21:29:07 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:59.269 21:29:10 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:02.566 21:29:13 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:05.130 21:29:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:07.664 21:29:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:10.201 21:29:21 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:10.201 21:29:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:10.201 21:29:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:10.201 21:29:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.201 21:29:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.201 21:29:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.201 21:29:21 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.201 21:29:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.201 21:29:21 -- paths/export.sh@5 -- $ export PATH 00:30:10.201 21:29:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.201 21:29:21 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:10.201 21:29:21 -- common/autobuild_common.sh@444 -- $ date +%s 00:30:10.201 21:29:21 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720992561.XXXXXX 00:30:10.201 21:29:21 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720992561.bKG9yx 00:30:10.201 21:29:21 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:30:10.201 21:29:21 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:30:10.201 21:29:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:30:10.201 21:29:21 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:10.201 21:29:21 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:10.201 21:29:21 -- common/autobuild_common.sh@460 -- $ get_config_params 00:30:10.201 21:29:21 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:30:10.201 21:29:21 -- common/autotest_common.sh@10 -- $ set +x 00:30:10.201 21:29:21 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:30:10.201 21:29:21 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:30:10.201 21:29:21 -- pm/common@17 -- $ local monitor 00:30:10.201 21:29:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:10.201 21:29:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:10.201 21:29:21 -- pm/common@25 -- $ sleep 1 00:30:10.201 21:29:21 -- pm/common@21 -- $ date +%s 00:30:10.201 21:29:21 -- pm/common@21 -- $ date +%s 00:30:10.201 21:29:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720992561 00:30:10.201 21:29:21 -- 
pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720992561 00:30:10.201 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720992561_collect-vmstat.pm.log 00:30:10.201 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720992561_collect-cpu-load.pm.log 00:30:11.137 21:29:22 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:30:11.137 21:29:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:30:11.137 21:29:22 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:30:11.137 21:29:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:11.137 21:29:22 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:11.137 21:29:22 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:11.137 21:29:22 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:11.137 21:29:22 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:11.137 21:29:22 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:11.137 21:29:22 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:11.397 21:29:22 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:11.397 21:29:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:11.397 21:29:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:11.397 21:29:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:11.397 21:29:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:11.397 21:29:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:30:11.397 21:29:22 -- pm/common@44 -- $ pid=93728 00:30:11.397 21:29:22 -- pm/common@50 -- $ kill -TERM 93728 00:30:11.397 21:29:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:11.397 21:29:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:30:11.397 21:29:22 -- pm/common@44 -- $ pid=93729 00:30:11.397 21:29:22 -- pm/common@50 -- $ kill -TERM 93729 00:30:11.397 + [[ -n 5167 ]] 00:30:11.397 + sudo kill 5167 00:30:11.406 [Pipeline] } 00:30:11.427 [Pipeline] // timeout 00:30:11.433 [Pipeline] } 00:30:11.452 [Pipeline] // stage 00:30:11.458 [Pipeline] } 00:30:11.476 [Pipeline] // catchError 00:30:11.486 [Pipeline] stage 00:30:11.488 [Pipeline] { (Stop VM) 00:30:11.502 [Pipeline] sh 00:30:11.784 + vagrant halt 00:30:15.974 ==> default: Halting domain... 00:30:22.551 [Pipeline] sh 00:30:22.828 + vagrant destroy -f 00:30:27.018 ==> default: Removing domain... 
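Editor's note on the shutdown above: the resource monitors started for packaging (collect-cpu-load and collect-vmstat) each record their PID under the power/ output directory, and the autopackage exit trap signals whatever is still recorded there before the VM is halted and destroyed. A minimal sketch of that pidfile convention is below; the directory path is taken from the log, while the helper name and its structure are invented for illustration and are not the harness's actual functions.

  # hypothetical helper mirroring the pidfile + kill -TERM pattern seen above
  stop_monitors() {
      local dir=/home/vagrant/spdk_repo/spdk/../output/power
      local pidfile pid
      for pidfile in "$dir"/collect-cpu-load.pid "$dir"/collect-vmstat.pid; do
          [[ -e $pidfile ]] || continue            # monitor never started or already cleaned up
          pid=$(<"$pidfile")
          kill -TERM "$pid" 2>/dev/null || true    # monitor may already have exited on its own
      done
  }
  stop_monitors

Using SIGTERM lets each monitor flush its .pm.log before exiting, which is why the pipeline can archive those logs afterwards.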
00:30:27.028 [Pipeline] sh 00:30:27.306 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:30:27.316 [Pipeline] } 00:30:27.333 [Pipeline] // stage 00:30:27.338 [Pipeline] } 00:30:27.355 [Pipeline] // dir 00:30:27.360 [Pipeline] } 00:30:27.372 [Pipeline] // wrap 00:30:27.378 [Pipeline] } 00:30:27.388 [Pipeline] // catchError 00:30:27.395 [Pipeline] stage 00:30:27.397 [Pipeline] { (Epilogue) 00:30:27.408 [Pipeline] sh 00:30:27.683 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:35.824 [Pipeline] catchError 00:30:35.826 [Pipeline] { 00:30:35.840 [Pipeline] sh 00:30:36.121 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:36.379 Artifacts sizes are good 00:30:36.389 [Pipeline] } 00:30:36.403 [Pipeline] // catchError 00:30:36.414 [Pipeline] archiveArtifacts 00:30:36.429 Archiving artifacts 00:30:36.637 [Pipeline] cleanWs 00:30:36.648 [WS-CLEANUP] Deleting project workspace... 00:30:36.649 [WS-CLEANUP] Deferred wipeout is used... 00:30:36.655 [WS-CLEANUP] done 00:30:36.657 [Pipeline] } 00:30:36.676 [Pipeline] // stage 00:30:36.681 [Pipeline] } 00:30:36.696 [Pipeline] // node 00:30:36.700 [Pipeline] End of Pipeline 00:30:36.733 Finished: SUCCESS
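Editor's note on the epilogue above: the pipeline compresses the collected output, verifies the artifacts stay within an acceptable size before Jenkins archives them, and then wipes the workspace. The real check lives in jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh; the sketch below only illustrates the general shape of such a guard, and the 2 GiB cap and failure message are assumptions, not that script's actual values (only the output path and the "Artifacts sizes are good" success line come from this log).

  # illustrative only: fail the epilogue if the output directory grew unreasonably large
  output_dir=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
  limit_kb=$((2 * 1024 * 1024))                  # assumed 2 GiB cap, not the script's real threshold
  size_kb=$(du -sk "$output_dir" | awk '{print $1}')
  if (( size_kb > limit_kb )); then
      echo "Artifacts too large: ${size_kb} KiB (limit ${limit_kb} KiB)" >&2
      exit 1
  fi
  echo "Artifacts sizes are good"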